00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2380
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3641
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.015 The recommended git tool is: git
00:00:00.016 using credential 00000000-0000-0000-0000-000000000002
00:00:00.022 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.037 Fetching changes from the remote Git repository
00:00:00.039 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.055 Using shallow fetch with depth 1
00:00:00.055 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.055 > git --version # timeout=10
00:00:00.083 > git --version # 'git version 2.39.2'
00:00:00.083 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.119 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.119 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.336 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.345 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.356 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:02.356 > git config core.sparsecheckout # timeout=10
00:00:02.365 > git read-tree -mu HEAD # timeout=10
00:00:02.377 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:02.393 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:02.394 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
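[Editor's note] Condensed from the trace above: the job pins the build-pool repo to one exact revision using a shallow, single-branch fetch. A minimal bash sketch of the same sequence (URL and commit taken from the log; proxy and credential setup omitted):

    # Shallow-fetch only the tip of master, then detach at the tested revision.
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf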
00:00:02.572 [Pipeline] Start of Pipeline
00:00:02.585 [Pipeline] library
00:00:02.587 Loading library shm_lib@master
00:00:02.588 Library shm_lib@master is cached. Copying from home.
00:00:02.606 [Pipeline] node
00:00:02.637 Running on VM-host-SM9 in /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:02.639 [Pipeline] {
00:00:02.649 [Pipeline] catchError
00:00:02.650 [Pipeline] {
00:00:02.664 [Pipeline] wrap
00:00:02.674 [Pipeline] {
00:00:02.683 [Pipeline] stage
00:00:02.685 [Pipeline] { (Prologue)
00:00:02.705 [Pipeline] echo
00:00:02.707 Node: VM-host-SM9
00:00:02.712 [Pipeline] cleanWs
00:00:02.724 [WS-CLEANUP] Deleting project workspace...
00:00:02.724 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.730 [WS-CLEANUP] done
00:00:02.922 [Pipeline] setCustomBuildProperty
00:00:03.004 [Pipeline] httpRequest
00:00:03.327 [Pipeline] echo
00:00:03.328 Sorcerer 10.211.164.20 is alive
00:00:03.338 [Pipeline] retry
00:00:03.339 [Pipeline] {
00:00:03.353 [Pipeline] httpRequest
00:00:03.358 HttpMethod: GET
00:00:03.358 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.359 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.359 Response Code: HTTP/1.1 200 OK
00:00:03.360 Success: Status code 200 is in the accepted range: 200,404
00:00:03.360 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.506 [Pipeline] }
00:00:03.522 [Pipeline] // retry
00:00:03.530 [Pipeline] sh
00:00:03.810 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.825 [Pipeline] httpRequest
00:00:04.273 [Pipeline] echo
00:00:04.274 Sorcerer 10.211.164.20 is alive
00:00:04.283 [Pipeline] retry
00:00:04.285 [Pipeline] {
00:00:04.299 [Pipeline] httpRequest
00:00:04.306 HttpMethod: GET
00:00:04.306 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:04.307 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:04.308 Response Code: HTTP/1.1 200 OK
00:00:04.308 Success: Status code 200 is in the accepted range: 200,404
00:00:04.308 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:19.604 [Pipeline] }
00:00:19.622 [Pipeline] // retry
00:00:19.630 [Pipeline] sh
00:00:19.911 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
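[Editor's note] The httpRequest/tar pairs above implement a simple package cache: download a tarball pinned to a commit hash from the local mirror (10.211.164.20), then unpack it. A rough shell equivalent of one round trip (curl is an assumed stand-in for the Jenkins httpRequest step):

    # -f fails on HTTP errors; --no-same-owner skips chown when extracting as non-root.
    curl -f -o spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz \
        http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
    tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz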
00:00:22.457 [Pipeline] sh
00:00:22.739 + git -C spdk log --oneline -n5
00:00:22.739 c13c99a5e test: Various fixes for Fedora40
00:00:22.739 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:22.739 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:22.739 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:22.739 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:22.759 [Pipeline] writeFile
00:00:22.777 [Pipeline] sh
00:00:23.060 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:23.072 [Pipeline] sh
00:00:23.355 + cat autorun-spdk.conf
00:00:23.355 SPDK_TEST_UNITTEST=1
00:00:23.355 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:23.355 SPDK_TEST_NVME=1
00:00:23.355 SPDK_TEST_BLOCKDEV=1
00:00:23.355 SPDK_RUN_ASAN=1
00:00:23.355 SPDK_RUN_UBSAN=1
00:00:23.355 SPDK_TEST_RAID5=1
00:00:23.355 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:23.361 RUN_NIGHTLY=1
00:00:23.363 [Pipeline] }
00:00:23.378 [Pipeline] // stage
00:00:23.395 [Pipeline] stage
00:00:23.397 [Pipeline] { (Run VM)
00:00:23.410 [Pipeline] sh
00:00:23.690 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:23.690 + echo 'Start stage prepare_nvme.sh'
00:00:23.690 Start stage prepare_nvme.sh
00:00:23.690 + [[ -n 4 ]]
00:00:23.690 + disk_prefix=ex4
00:00:23.690 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]]
00:00:23.690 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]]
00:00:23.690 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf
00:00:23.690 ++ SPDK_TEST_UNITTEST=1
00:00:23.690 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:23.690 ++ SPDK_TEST_NVME=1
00:00:23.690 ++ SPDK_TEST_BLOCKDEV=1
00:00:23.690 ++ SPDK_RUN_ASAN=1
00:00:23.690 ++ SPDK_RUN_UBSAN=1
00:00:23.690 ++ SPDK_TEST_RAID5=1
00:00:23.690 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:23.690 ++ RUN_NIGHTLY=1
00:00:23.690 + cd /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:23.690 + nvme_files=()
00:00:23.690 + declare -A nvme_files
00:00:23.690 + backend_dir=/var/lib/libvirt/images/backends
00:00:23.690 + nvme_files['nvme.img']=5G
00:00:23.690 + nvme_files['nvme-cmb.img']=5G
00:00:23.690 + nvme_files['nvme-multi0.img']=4G
00:00:23.690 + nvme_files['nvme-multi1.img']=4G
00:00:23.690 + nvme_files['nvme-multi2.img']=4G
00:00:23.690 + nvme_files['nvme-openstack.img']=8G
00:00:23.690 + nvme_files['nvme-zns.img']=5G
00:00:23.690 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:23.690 + (( SPDK_TEST_FTL == 1 ))
00:00:23.690 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:23.690 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:23.690 + for nvme in "${!nvme_files[@]}"
00:00:23.690 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:00:23.690 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:23.690 + for nvme in "${!nvme_files[@]}"
00:00:23.690 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:00:23.690 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:23.690 + for nvme in "${!nvme_files[@]}"
00:00:23.690 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:00:23.690 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:23.690 + for nvme in "${!nvme_files[@]}"
00:00:23.690 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:00:23.690 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:23.690 + for nvme in "${!nvme_files[@]}"
00:00:23.690 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:00:23.949 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:23.949 + for nvme in "${!nvme_files[@]}"
00:00:23.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:00:23.949 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:23.949 + for nvme in "${!nvme_files[@]}"
00:00:23.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:00:23.949 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:23.949 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:00:23.949 + echo 'End stage prepare_nvme.sh'
00:00:23.949 End stage prepare_nvme.sh
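[Editor's note] Each "Formatting ..." line above is the image tool reporting a raw, falloc-preallocated backing file. A hedged sketch of the equivalent direct call (create_nvme_img.sh is SPDK's wrapper; qemu-img is assumed to be what it invokes, given the fmt/size/preallocation wording of the output):

    # 5 GiB raw file; blocks reserved up front via fallocate(2), not zero-filled.
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex4-nvme.img 5G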
00:00:23.962 [Pipeline] sh
00:00:24.244 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:24.244 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2404
00:00:24.503 
00:00:24.503 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant
00:00:24.503 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk
00:00:24.503 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest
00:00:24.503 HELP=0
00:00:24.503 DRY_RUN=0
00:00:24.503 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,
00:00:24.503 NVME_DISKS_TYPE=nvme,
00:00:24.503 NVME_AUTO_CREATE=0
00:00:24.503 NVME_DISKS_NAMESPACES=,
00:00:24.503 NVME_CMB=,
00:00:24.503 NVME_PMR=,
00:00:24.503 NVME_ZNS=,
00:00:24.503 NVME_MS=,
00:00:24.503 NVME_FDP=,
00:00:24.503 SPDK_VAGRANT_DISTRO=ubuntu2404
00:00:24.503 SPDK_VAGRANT_VMCPU=10
00:00:24.503 SPDK_VAGRANT_VMRAM=12288
00:00:24.503 SPDK_VAGRANT_PROVIDER=libvirt
00:00:24.503 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:24.503 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:24.503 SPDK_OPENSTACK_NETWORK=0
00:00:24.503 VAGRANT_PACKAGE_BOX=0
00:00:24.503 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:24.503 FORCE_DISTRO=true
00:00:24.503 VAGRANT_BOX_VERSION=
00:00:24.503 EXTRA_VAGRANTFILES=
00:00:24.503 NIC_MODEL=e1000
00:00:24.503 
00:00:24.503 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt'
00:00:24.503 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:27.791 Bringing machine 'default' up with 'libvirt' provider...
00:00:28.050 ==> default: Creating image (snapshot of base box volume).
00:00:28.309 ==> default: Creating domain with the following settings...
00:00:28.309 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1731904851_b016e4892c2066be732d
00:00:28.309 ==> default: -- Domain type: kvm
00:00:28.309 ==> default: -- Cpus: 10
00:00:28.309 ==> default: -- Feature: acpi
00:00:28.309 ==> default: -- Feature: apic
00:00:28.309 ==> default: -- Feature: pae
00:00:28.309 ==> default: -- Memory: 12288M
00:00:28.309 ==> default: -- Memory Backing: hugepages:
00:00:28.309 ==> default: -- Management MAC:
00:00:28.309 ==> default: -- Loader:
00:00:28.309 ==> default: -- Nvram:
00:00:28.309 ==> default: -- Base box: spdk/ubuntu2404
00:00:28.309 ==> default: -- Storage pool: default
00:00:28.309 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1731904851_b016e4892c2066be732d.img (20G)
00:00:28.309 ==> default: -- Volume Cache: default
00:00:28.309 ==> default: -- Kernel:
00:00:28.309 ==> default: -- Initrd:
00:00:28.309 ==> default: -- Graphics Type: vnc
00:00:28.309 ==> default: -- Graphics Port: -1
00:00:28.309 ==> default: -- Graphics IP: 127.0.0.1
00:00:28.309 ==> default: -- Graphics Password: Not defined
00:00:28.309 ==> default: -- Video Type: cirrus
00:00:28.309 ==> default: -- Video VRAM: 9216
00:00:28.309 ==> default: -- Sound Type:
00:00:28.309 ==> default: -- Keymap: en-us
00:00:28.309 ==> default: -- TPM Path:
00:00:28.309 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:28.309 ==> default: -- Command line args:
00:00:28.309 ==> default: -> value=-device,
00:00:28.309 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:00:28.309 ==> default: -> value=-drive,
00:00:28.309 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:00:28.309 ==> default: -> value=-device,
00:00:28.309 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
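[Editor's note] The "-> value=" pairs above are extra QEMU arguments that the vagrant-libvirt provider appends verbatim, wiring the raw backing file to an emulated NVMe controller plus one namespace. Stitched together into a single (heavily trimmed, hypothetical) invocation they read:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
    # Controller and namespace are separate devices; the drive is detached
    # (if=none) and referenced by id, so the guest sees one nvme0n1 namespace.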
00:00:28.309 ==> default: Creating shared folders metadata...
00:00:28.309 ==> default: Starting domain.
00:00:29.689 ==> default: Waiting for domain to get an IP address...
00:00:39.683 ==> default: Waiting for SSH to become available...
00:00:40.249 ==> default: Configuring and enabling network interfaces...
00:00:45.520 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:00:50.790 ==> default: Mounting SSHFS shared folder...
00:00:51.737 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output
00:00:51.737 ==> default: Checking Mount..
00:00:52.380 ==> default: Folder Successfully Mounted!
00:00:52.380 ==> default: Running provisioner: file...
00:00:52.639 default: ~/.gitconfig => .gitconfig
00:00:52.898 
00:00:52.898 SUCCESS!
00:00:52.898 
00:00:52.898 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use.
00:00:52.898 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:00:52.898 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm.
00:00:52.898 
00:00:52.908 [Pipeline] }
00:00:52.922 [Pipeline] // stage
00:00:52.931 [Pipeline] dir
00:00:52.932 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt
00:00:52.933 [Pipeline] {
00:00:52.947 [Pipeline] catchError
00:00:52.950 [Pipeline] {
00:00:52.962 [Pipeline] sh
00:00:53.242 + vagrant ssh-config --host vagrant
00:00:53.242 + sed -ne /^Host/,$p
00:00:53.242 + tee ssh_conf
00:00:56.546 Host vagrant
00:00:56.546 HostName 192.168.121.252
00:00:56.546 User vagrant
00:00:56.546 Port 22
00:00:56.546 UserKnownHostsFile /dev/null
00:00:56.546 StrictHostKeyChecking no
00:00:56.546 PasswordAuthentication no
00:00:56.546 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404
00:00:56.546 IdentitiesOnly yes
00:00:56.546 LogLevel FATAL
00:00:56.546 ForwardAgent yes
00:00:56.546 ForwardX11 yes
00:00:56.546 
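[Editor's note] The pipeline above captures vagrant's generated SSH settings into a plain file so later steps can bypass the slow `vagrant ssh` wrapper and talk to the VM directly. A sketch of the pattern (host alias and file name as in the log; the echo payload is illustrative):

    # Dump the SSH stanza once, then reuse it for every remote command.
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
    ssh -t -F ssh_conf vagrant@vagrant 'echo connected'
    scp -F ssh_conf some-script.sh vagrant@vagrant:./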
00:00:56.561 [Pipeline] withEnv
00:00:56.563 [Pipeline] {
00:00:56.579 [Pipeline] sh
00:00:56.862 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:00:56.862 source /etc/os-release
00:00:56.862 [[ -e /image.version ]] && img=$(< /image.version)
00:00:56.862 # Minimal, systemd-like check.
00:00:56.862 if [[ -e /.dockerenv ]]; then
00:00:56.862 # Clear garbage from the node's name:
00:00:56.862 # agt-er_autotest_547-896 -> autotest_547-896
00:00:56.862 # $HOSTNAME is the actual container id
00:00:56.862 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:00:56.862 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:00:56.862 # We can assume this is a mount from a host where container is running,
00:00:56.862 # so fetch its hostname to easily identify the target swarm worker.
00:00:56.862 container="$(< /etc/hostname) ($agent)"
00:00:56.862 else
00:00:56.862 # Fallback
00:00:56.862 container=$agent
00:00:56.862 fi
00:00:56.862 fi
00:00:56.862 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:00:56.862 
00:00:57.133 [Pipeline] }
00:00:57.152 [Pipeline] // withEnv
00:00:57.162 [Pipeline] setCustomBuildProperty
00:00:57.179 [Pipeline] stage
00:00:57.182 [Pipeline] { (Tests)
00:00:57.201 [Pipeline] sh
00:00:57.481 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:00:57.754 [Pipeline] sh
00:00:58.034 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:00:58.308 [Pipeline] timeout
00:00:58.309 Timeout set to expire in 1 hr 30 min
00:00:58.311 [Pipeline] {
00:00:58.326 [Pipeline] sh
00:00:58.611 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:00:59.179 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:00:59.191 [Pipeline] sh
00:00:59.472 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:00:59.740 [Pipeline] sh
00:01:00.041 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:00.312 [Pipeline] sh
00:01:00.586 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo
00:01:00.844 ++ readlink -f spdk_repo
00:01:00.844 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:00.844 + [[ -n /home/vagrant/spdk_repo ]]
00:01:00.844 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:00.844 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:00.844 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:00.844 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:00.844 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:00.844 + [[ ubuntu24-vg-autotest == pkgdep-* ]]
00:01:00.844 + cd /home/vagrant/spdk_repo
00:01:00.844 + source /etc/os-release
00:01:00.844 ++ PRETTY_NAME='Ubuntu 24.04 LTS'
00:01:00.844 ++ NAME=Ubuntu
00:01:00.844 ++ VERSION_ID=24.04
00:01:00.844 ++ VERSION='24.04 LTS (Noble Numbat)'
00:01:00.844 ++ VERSION_CODENAME=noble
00:01:00.844 ++ ID=ubuntu
00:01:00.844 ++ ID_LIKE=debian
00:01:00.844 ++ HOME_URL=https://www.ubuntu.com/
00:01:00.844 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:00.844 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:00.844 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:00.844 ++ UBUNTU_CODENAME=noble
00:01:00.844 ++ LOGO=ubuntu-logo
00:01:00.844 + uname -a
00:01:00.844 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:00.844 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:00.844 Hugepages
00:01:00.844 node hugesize free / total
00:01:00.844 node0 1048576kB 0 / 0
00:01:00.844 node0 2048kB 0 / 0
00:01:00.844 
00:01:01.104 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:01.104 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:01.104 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
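[Editor's note] The `setup.sh status` table above shows zero hugepages reserved at this point (free / total is 0 / 0 for both page sizes); SPDK's scripts reserve them later in the run. For orientation, a sketch of reading and setting the same counters straight from sysfs (standard kernel paths; the 1024 value is illustrative):

    # Current 2 MiB hugepage count, as summarized in the table above.
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # Reserve 1024 pages; needs root, like the sudo call in the log.
    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages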
00:01:01.104 + rm -f /tmp/spdk-ld-path
00:01:01.104 + source autorun-spdk.conf
00:01:01.104 ++ SPDK_TEST_UNITTEST=1
00:01:01.104 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.104 ++ SPDK_TEST_NVME=1
00:01:01.104 ++ SPDK_TEST_BLOCKDEV=1
00:01:01.104 ++ SPDK_RUN_ASAN=1
00:01:01.104 ++ SPDK_RUN_UBSAN=1
00:01:01.104 ++ SPDK_TEST_RAID5=1
00:01:01.104 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:01.104 ++ RUN_NIGHTLY=1
00:01:01.104 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:01.104 + [[ -n '' ]]
00:01:01.104 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:01.104 + for M in /var/spdk/build-*-manifest.txt
00:01:01.104 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:01.104 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:01.104 + for M in /var/spdk/build-*-manifest.txt
00:01:01.104 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:01.104 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:01.104 ++ uname
00:01:01.104 + [[ Linux == \L\i\n\u\x ]]
00:01:01.104 + sudo dmesg -T
00:01:01.104 + sudo dmesg --clear
00:01:01.104 + dmesg_pid=2378
00:01:01.104 + [[ Ubuntu == FreeBSD ]]
00:01:01.104 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.104 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.104 + sudo dmesg -Tw
00:01:01.104 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.104 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.104 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.104 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.104 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.104 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:01.104 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:01.104 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:01.104 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.104 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:01.104 Test configuration:
00:01:01.104 SPDK_TEST_UNITTEST=1
00:01:01.104 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.104 SPDK_TEST_NVME=1
00:01:01.104 SPDK_TEST_BLOCKDEV=1
00:01:01.104 SPDK_RUN_ASAN=1
00:01:01.104 SPDK_RUN_UBSAN=1
00:01:01.104 SPDK_TEST_RAID5=1
00:01:01.104 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:01.104 RUN_NIGHTLY=1
00:01:01.104 04:41:23 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:01.104 04:41:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:01.104 04:41:23 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:01.104 04:41:23 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:01.104 04:41:23 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:01.104 04:41:23 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:01.104 04:41:23 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:01.104 04:41:23 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:01.104 04:41:23 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:01.104 04:41:23 -- paths/export.sh@6 -- $ export PATH
00:01:01.104 04:41:23 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:01.104 04:41:23 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:01.104 04:41:23 -- common/autobuild_common.sh@440 -- $ date +%s
00:01:01.104 04:41:23 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731904883.XXXXXX
00:01:01.104 04:41:23 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731904883.hd3zEd
00:01:01.104 04:41:23 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:01.104 04:41:23 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:01.104 04:41:23 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:01.104 04:41:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:01.104 04:41:23 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:01.104 04:41:23 -- common/autobuild_common.sh@456 -- $ get_config_params
00:01:01.104 04:41:23 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:01.104 04:41:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.104 04:41:23 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:01.104 04:41:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:01.104 04:41:23 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:01.104 04:41:23 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:01.104 04:41:23 -- spdk/autobuild.sh@16 -- $ date -u
00:01:01.104 Mon Nov 18 04:41:23 UTC 2024
00:01:01.104 04:41:23 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:01.364 LTS-67-gc13c99a5e
00:01:01.364 04:41:23 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:01.364 04:41:23 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:01.364 04:41:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:01.364 04:41:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:01.364 04:41:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.364 ************************************
00:01:01.364 START TEST asan
00:01:01.364 ************************************
00:01:01.364 using asan
00:01:01.364 04:41:23 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:01:01.364 
00:01:01.364 real 0m0.000s
00:01:01.364 user 0m0.000s
00:01:01.364 sys 0m0.000s
00:01:01.364 04:41:23 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:01.364 04:41:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.364 ************************************
00:01:01.364 END TEST asan
00:01:01.364 ************************************
00:01:01.364 04:41:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:01.364 04:41:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:01.364 04:41:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:01.364 04:41:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:01.364 04:41:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.364 ************************************
00:01:01.364 START TEST ubsan
00:01:01.364 ************************************
00:01:01.364 using ubsan
00:01:01.364 04:41:24 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:01.364 
00:01:01.364 real 0m0.000s
00:01:01.364 user 0m0.000s
00:01:01.364 sys 0m0.000s
00:01:01.364 ************************************
00:01:01.364 END TEST ubsan
00:01:01.364 ************************************
00:01:01.364 04:41:24 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:01.364 04:41:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.364 04:41:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:01.364 04:41:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:01.364 04:41:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:01.364 04:41:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:01.364 04:41:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:01.364 04:41:24 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:01.364 04:41:24 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:01.364 04:41:24 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build
00:01:01.364 04:41:24 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:01:01.364 04:41:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:01.364 04:41:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.364 ************************************
00:01:01.364 START TEST unittest_build
00:01:01.364 ************************************
00:01:01.364 04:41:24 -- common/autotest_common.sh@1114 -- $ _unittest_build
00:01:01.364 04:41:24 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --without-shared
00:01:01.364 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:01.364 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:01.932 Using 'verbs' RDMA provider
00:01:17.745 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:01:29.955 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:29.955 Creating mk/config.mk...done.
00:01:29.955 Creating mk/cc.flags.mk...done.
00:01:29.955 Type 'make' to build.
00:01:29.955 04:41:52 -- common/autobuild_common.sh@408 -- $ make -j10
00:01:29.955 make[1]: Nothing to be done for 'all'.
00:01:44.854 The Meson build system
00:01:44.854 Version: 1.4.1
00:01:44.854 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:01:44.854 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:01:44.854 Build type: native build
00:01:44.854 Program cat found: YES (/usr/bin/cat)
00:01:44.854 Project name: DPDK
00:01:44.854 Project version: 23.11.0
00:01:44.854 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0")
00:01:44.854 C linker for the host machine: cc ld.bfd 2.42
00:01:44.854 Host machine cpu family: x86_64
00:01:44.854 Host machine cpu: x86_64
00:01:44.854 Message: ## Building in Developer Mode ##
00:01:44.854 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:44.854 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:01:44.854 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:44.854 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3)
00:01:44.854 Program cat found: YES (/usr/bin/cat)
00:01:44.854 Compiler for C supports arguments -march=native: YES
00:01:44.854 Checking for size of "void *" : 8
00:01:44.854 Checking for size of "void *" : 8 (cached)
00:01:44.854 Library m found: YES
00:01:44.854 Library numa found: YES
00:01:44.854 Has header "numaif.h" : YES
00:01:44.854 Library fdt found: NO
00:01:44.854 Library execinfo found: NO
00:01:44.854 Has header "execinfo.h" : YES
00:01:44.854 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1
00:01:44.854 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:44.854 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:44.854 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:44.854 Run-time dependency openssl found: YES 3.0.13
00:01:44.854 Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:44.854 Library pcap found: NO
00:01:44.854 Compiler for C supports arguments -Wcast-qual: YES
00:01:44.854 Compiler for C supports arguments -Wdeprecated: YES
00:01:44.854 Compiler for C supports arguments -Wformat: YES
00:01:44.854 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:44.854 Compiler for C supports arguments -Wformat-security: YES
00:01:44.854 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:44.854 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:44.854 Compiler for C supports arguments -Wnested-externs: YES
00:01:44.854 Compiler for C supports arguments -Wold-style-definition: YES
00:01:44.854 Compiler for C supports arguments -Wpointer-arith: YES
00:01:44.854 Compiler for C supports arguments -Wsign-compare: YES
00:01:44.854 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:44.854 Compiler for C supports arguments -Wundef: YES
00:01:44.854 Compiler for C supports arguments -Wwrite-strings: YES
00:01:44.854 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:44.854 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:44.854 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:44.854 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:44.854 Program objdump found: YES (/usr/bin/objdump)
00:01:44.854 Compiler for C supports arguments -mavx512f: YES
00:01:44.854 Checking if "AVX512 checking" compiles: YES
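[Editor's note] Each "Compiler for C supports arguments" probe above is Meson compiling a tiny test program with the candidate flag and recording YES/NO. A rough shell rendition of one probe (illustrative only, not Meson's exact invocation):

    # Does the compiler accept -mavx512f? Exit status decides YES/NO.
    echo 'int main(void) { return 0; }' > probe.c
    cc -mavx512f -Werror -c probe.c -o /dev/null \
        && echo '-mavx512f: YES' || echo '-mavx512f: NO'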
00:01:44.854 Fetching value of define "__SSE4_2__" : 1
00:01:44.854 Fetching value of define "__AES__" : 1
00:01:44.854 Fetching value of define "__AVX__" : 1
00:01:44.854 Fetching value of define "__AVX2__" : 1
00:01:44.854 Fetching value of define "__AVX512BW__" : (undefined)
00:01:44.854 Fetching value of define "__AVX512CD__" : (undefined)
00:01:44.854 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:44.854 Fetching value of define "__AVX512F__" : (undefined)
00:01:44.854 Fetching value of define "__AVX512VL__" : (undefined)
00:01:44.854 Fetching value of define "__PCLMUL__" : 1
00:01:44.854 Fetching value of define "__RDRND__" : 1
00:01:44.854 Fetching value of define "__RDSEED__" : 1
00:01:44.854 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:44.854 Fetching value of define "__znver1__" : (undefined)
00:01:44.854 Fetching value of define "__znver2__" : (undefined)
00:01:44.854 Fetching value of define "__znver3__" : (undefined)
00:01:44.854 Fetching value of define "__znver4__" : (undefined)
00:01:44.854 Library asan found: YES
00:01:44.854 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:44.854 Message: lib/log: Defining dependency "log"
00:01:44.854 Message: lib/kvargs: Defining dependency "kvargs"
00:01:44.854 Message: lib/telemetry: Defining dependency "telemetry"
00:01:44.854 Library rt found: YES
00:01:44.854 Checking for function "getentropy" : NO
00:01:44.854 Message: lib/eal: Defining dependency "eal"
00:01:44.854 Message: lib/ring: Defining dependency "ring"
00:01:44.854 Message: lib/rcu: Defining dependency "rcu"
00:01:44.854 Message: lib/mempool: Defining dependency "mempool"
00:01:44.854 Message: lib/mbuf: Defining dependency "mbuf"
00:01:44.854 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:44.854 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:44.854 Compiler for C supports arguments -mpclmul: YES
00:01:44.854 Compiler for C supports arguments -maes: YES
00:01:44.854 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:44.854 Compiler for C supports arguments -mavx512bw: YES
00:01:44.854 Compiler for C supports arguments -mavx512dq: YES
00:01:44.854 Compiler for C supports arguments -mavx512vl: YES
00:01:44.854 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:44.854 Compiler for C supports arguments -mavx2: YES
00:01:44.854 Compiler for C supports arguments -mavx: YES
00:01:44.854 Message: lib/net: Defining dependency "net"
00:01:44.854 Message: lib/meter: Defining dependency "meter"
00:01:44.854 Message: lib/ethdev: Defining dependency "ethdev"
00:01:44.854 Message: lib/pci: Defining dependency "pci"
00:01:44.854 Message: lib/cmdline: Defining dependency "cmdline"
00:01:44.854 Message: lib/hash: Defining dependency "hash"
00:01:44.854 Message: lib/timer: Defining dependency "timer"
00:01:44.854 Message: lib/compressdev: Defining dependency "compressdev"
00:01:44.854 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:44.854 Message: lib/dmadev: Defining dependency "dmadev"
00:01:44.854 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:44.854 Message: lib/power: Defining dependency "power"
00:01:44.854 Message: lib/reorder: Defining dependency "reorder"
00:01:44.854 Message: lib/security: Defining dependency "security"
00:01:44.854 Has header "linux/userfaultfd.h" : YES
00:01:44.854 Has header "linux/vduse.h" : YES
00:01:44.854 Message: lib/vhost: Defining dependency "vhost"
00:01:44.854 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:44.854 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:44.854 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:44.854 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:44.854 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:44.854 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:44.854 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:44.854 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:44.854 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:44.854 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:44.854 Program doxygen found: YES (/usr/bin/doxygen)
00:01:44.854 Configuring doxy-api-html.conf using configuration
00:01:44.854 Configuring doxy-api-man.conf using configuration
00:01:44.854 Program mandb found: YES (/usr/bin/mandb)
00:01:44.854 Program sphinx-build found: NO
00:01:44.854 Configuring rte_build_config.h using configuration
00:01:44.854 Message:
00:01:44.854 =================
00:01:44.854 Applications Enabled
00:01:44.854 =================
00:01:44.854 
00:01:44.854 apps:
00:01:44.854 
00:01:44.854 
00:01:44.854 Message:
00:01:44.854 =================
00:01:44.854 Libraries Enabled
00:01:44.854 =================
00:01:44.854 
00:01:44.854 libs:
00:01:44.854 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:44.854 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:44.854 cryptodev, dmadev, power, reorder, security, vhost,
00:01:44.854 
00:01:44.854 Message:
00:01:44.854 ===============
00:01:44.854 Drivers Enabled
00:01:44.854 ===============
00:01:44.854 
00:01:44.854 common:
00:01:44.854 
00:01:44.854 bus:
00:01:44.854 pci, vdev,
00:01:44.854 mempool:
00:01:44.854 ring,
00:01:44.854 dma:
00:01:44.854 
00:01:44.854 net:
00:01:44.854 
00:01:44.854 crypto:
00:01:44.854 
00:01:44.854 compress:
00:01:44.854 
00:01:44.854 vdpa:
00:01:44.854 
00:01:44.854 
00:01:44.854 Message:
00:01:44.854 =================
00:01:44.854 Content Skipped
00:01:44.854 =================
00:01:44.854 
00:01:44.854 apps:
00:01:44.854 dumpcap: explicitly disabled via build config
00:01:44.854 graph: explicitly disabled via build config
00:01:44.854 pdump: explicitly disabled via build config
00:01:44.854 proc-info: explicitly disabled via build config
00:01:44.854 test-acl: explicitly disabled via build config
00:01:44.854 test-bbdev: explicitly disabled via build config
00:01:44.854 test-cmdline: explicitly disabled via build config
00:01:44.854 test-compress-perf: explicitly disabled via build config
00:01:44.854 test-crypto-perf: explicitly disabled via build config
00:01:44.855 test-dma-perf: explicitly disabled via build config
00:01:44.855 test-eventdev: explicitly disabled via build config
00:01:44.855 test-fib: explicitly disabled via build config
00:01:44.855 test-flow-perf: explicitly disabled via build config
00:01:44.855 test-gpudev: explicitly disabled via build config
00:01:44.855 test-mldev: explicitly disabled via build config
00:01:44.855 test-pipeline: explicitly disabled via build config
00:01:44.855 test-pmd: explicitly disabled via build config
00:01:44.855 test-regex: explicitly disabled via build config
00:01:44.855 test-sad: explicitly disabled via build config
00:01:44.855 test-security-perf: explicitly disabled via build config
00:01:44.855 
00:01:44.855 libs:
00:01:44.855 metrics: explicitly disabled via build config
00:01:44.855 acl: explicitly disabled via build config
00:01:44.855 bbdev: explicitly disabled via build config
00:01:44.855 bitratestats: explicitly disabled via build config
00:01:44.855 bpf: explicitly disabled via build config
00:01:44.855 cfgfile: explicitly disabled via build config
00:01:44.855 distributor: explicitly disabled via build config
00:01:44.855 efd: explicitly disabled via build config
00:01:44.855 eventdev: explicitly disabled via build config
00:01:44.855 dispatcher: explicitly disabled via build config
00:01:44.855 gpudev: explicitly disabled via build config
00:01:44.855 gro: explicitly disabled via build config
00:01:44.855 gso: explicitly disabled via build config
00:01:44.855 ip_frag: explicitly disabled via build config
00:01:44.855 jobstats: explicitly disabled via build config
00:01:44.855 latencystats: explicitly disabled via build config
00:01:44.855 lpm: explicitly disabled via build config
00:01:44.855 member: explicitly disabled via build config
00:01:44.855 pcapng: explicitly disabled via build config
00:01:44.855 rawdev: explicitly disabled via build config
00:01:44.855 regexdev: explicitly disabled via build config
00:01:44.855 mldev: explicitly disabled via build config
00:01:44.855 rib: explicitly disabled via build config
00:01:44.855 sched: explicitly disabled via build config
00:01:44.855 stack: explicitly disabled via build config
00:01:44.855 ipsec: explicitly disabled via build config
00:01:44.855 pdcp: explicitly disabled via build config
00:01:44.855 fib: explicitly disabled via build config
00:01:44.855 port: explicitly disabled via build config
00:01:44.855 pdump: explicitly disabled via build config
00:01:44.855 table: explicitly disabled via build config
00:01:44.855 pipeline: explicitly disabled via build config
00:01:44.855 graph: explicitly disabled via build config
00:01:44.855 node: explicitly disabled via build config
00:01:44.855 
00:01:44.855 drivers:
00:01:44.855 common/cpt: not in enabled drivers build config
00:01:44.855 common/dpaax: not in enabled drivers build config
00:01:44.855 common/iavf: not in enabled drivers build config
00:01:44.855 common/idpf: not in enabled drivers build config
00:01:44.855 common/mvep: not in enabled drivers build config
00:01:44.855 common/octeontx: not in enabled drivers build config
00:01:44.855 bus/auxiliary: not in enabled drivers build config
00:01:44.855 bus/cdx: not in enabled drivers build config
00:01:44.855 bus/dpaa: not in enabled drivers build config
00:01:44.855 bus/fslmc: not in enabled drivers build config
00:01:44.855 bus/ifpga: not in enabled drivers build config
00:01:44.855 bus/platform: not in enabled drivers build config
00:01:44.855 bus/vmbus: not in enabled drivers build config
00:01:44.855 common/cnxk: not in enabled drivers build config
00:01:44.855 common/mlx5: not in enabled drivers build config
00:01:44.855 common/nfp: not in enabled drivers build config
00:01:44.855 common/qat: not in enabled drivers build config
00:01:44.855 common/sfc_efx: not in enabled drivers build config
00:01:44.855 mempool/bucket: not in enabled drivers build config
00:01:44.855 mempool/cnxk: not in enabled drivers build config
00:01:44.855 mempool/dpaa: not in enabled drivers build config
00:01:44.855 mempool/dpaa2: not in enabled drivers build config
00:01:44.855 mempool/octeontx: not in enabled drivers build config
00:01:44.855 mempool/stack: not in enabled drivers build config
00:01:44.855 dma/cnxk: not in enabled drivers build config
00:01:44.855 dma/dpaa: not in enabled drivers build config
00:01:44.855 dma/dpaa2: not in enabled drivers build config
00:01:44.855 dma/hisilicon: not in enabled drivers build config
00:01:44.855 dma/idxd: not in enabled drivers build config
00:01:44.855 dma/ioat: not in enabled drivers build config
00:01:44.855 dma/skeleton: not in enabled drivers build config
00:01:44.855 net/af_packet: not in enabled drivers build config
00:01:44.855 net/af_xdp: not in enabled drivers build config
00:01:44.855 net/ark: not in enabled drivers build config
00:01:44.855 net/atlantic: not in enabled drivers build config
00:01:44.855 net/avp: not in enabled drivers build config
00:01:44.855 net/axgbe: not in enabled drivers build config
00:01:44.855 net/bnx2x: not in enabled drivers build config
00:01:44.855 net/bnxt: not in enabled drivers build config
00:01:44.855 net/bonding: not in enabled drivers build config
00:01:44.855 net/cnxk: not in enabled drivers build config
00:01:44.855 net/cpfl: not in enabled drivers build config
00:01:44.855 net/cxgbe: not in enabled drivers build config
00:01:44.855 net/dpaa: not in enabled drivers build config
00:01:44.855 net/dpaa2: not in enabled drivers build config
00:01:44.855 net/e1000: not in enabled drivers build config
00:01:44.855 net/ena: not in enabled drivers build config
00:01:44.855 net/enetc: not in enabled drivers build config
00:01:44.855 net/enetfec: not in enabled drivers build config
00:01:44.855 net/enic: not in enabled drivers build config
00:01:44.855 net/failsafe: not in enabled drivers build config
00:01:44.855 net/fm10k: not in enabled drivers build config
00:01:44.855 net/gve: not in enabled drivers build config
00:01:44.855 net/hinic: not in enabled drivers build config
00:01:44.855 net/hns3: not in enabled drivers build config
00:01:44.855 net/i40e: not in enabled drivers build config
00:01:44.855 net/iavf: not in enabled drivers build config
00:01:44.855 net/ice: not in enabled drivers build config
00:01:44.855 net/idpf: not in enabled drivers build config
00:01:44.855 net/igc: not in enabled drivers build config
00:01:44.855 net/ionic: not in enabled drivers build config
00:01:44.855 net/ipn3ke: not in enabled drivers build config
00:01:44.855 net/ixgbe: not in enabled drivers build config
00:01:44.855 net/mana: not in enabled drivers build config
00:01:44.855 net/memif: not in enabled drivers build config
00:01:44.855 net/mlx4: not in enabled drivers build config
00:01:44.855 net/mlx5: not in enabled drivers build config
00:01:44.855 net/mvneta: not in enabled drivers build config
00:01:44.855 net/mvpp2: not in enabled drivers build config
00:01:44.855 net/netvsc: not in enabled drivers build config
00:01:44.855 net/nfb: not in enabled drivers build config
00:01:44.855 net/nfp: not in enabled drivers build config
00:01:44.855 net/ngbe: not in enabled drivers build config
00:01:44.855 net/null: not in enabled drivers build config
00:01:44.855 net/octeontx: not in enabled drivers build config
00:01:44.855 net/octeon_ep: not in enabled drivers build config
00:01:44.855 net/pcap: not in enabled drivers build config
00:01:44.855 net/pfe: not in enabled drivers build config
00:01:44.855 net/qede: not in enabled drivers build config
00:01:44.855 net/ring: not in enabled drivers build config
00:01:44.855 net/sfc: not in enabled drivers build config
00:01:44.855 net/softnic: not in enabled drivers build config
00:01:44.855 net/tap: not in enabled drivers build config
00:01:44.855 net/thunderx: not in enabled drivers build config
00:01:44.855 net/txgbe: not in enabled drivers build config
00:01:44.855 net/vdev_netvsc: not in enabled drivers build config
00:01:44.855 net/vhost: not in enabled drivers build config
00:01:44.855 net/virtio: not in enabled drivers build config
00:01:44.855 net/vmxnet3: not in enabled drivers build config
00:01:44.855 raw/*: missing internal dependency, "rawdev"
00:01:44.855 crypto/armv8: not in enabled drivers build config
00:01:44.855 crypto/bcmfs: not in enabled drivers build config
00:01:44.855 crypto/caam_jr: not in enabled drivers build config
00:01:44.855 crypto/ccp: not in enabled drivers build config
00:01:44.855 crypto/cnxk: not in enabled drivers build config
00:01:44.855 crypto/dpaa_sec: not in enabled drivers build config
00:01:44.855 crypto/dpaa2_sec: not in enabled drivers build config
00:01:44.855 crypto/ipsec_mb: not in enabled drivers build config
00:01:44.855 crypto/mlx5: not in enabled drivers build config
00:01:44.855 crypto/mvsam: not in enabled drivers build config
00:01:44.855 crypto/nitrox: not in enabled drivers build config
00:01:44.855 crypto/null: not in enabled drivers build config
00:01:44.855 crypto/octeontx: not in enabled drivers build config
00:01:44.855 crypto/openssl: not in enabled drivers build config
00:01:44.855 crypto/scheduler: not in enabled drivers build config
00:01:44.855 crypto/uadk: not in enabled drivers build config
00:01:44.855 crypto/virtio: not in enabled drivers build config
00:01:44.855 compress/isal: not in enabled drivers build config
00:01:44.855 compress/mlx5: not in enabled drivers build config
00:01:44.855 compress/octeontx: not in enabled drivers build config
00:01:44.855 compress/zlib: not in enabled drivers build config
00:01:44.855 regex/*: missing internal dependency, "regexdev"
00:01:44.855 ml/*: missing internal dependency, "mldev"
00:01:44.855 vdpa/ifc: not in enabled drivers build config
00:01:44.855 vdpa/mlx5: not in enabled drivers build config
00:01:44.855 vdpa/nfp: not in enabled drivers build config
00:01:44.855 vdpa/sfc: not in enabled drivers build config
00:01:44.855 event/*: missing internal dependency, "eventdev"
00:01:44.855 baseband/*: missing internal dependency, "bbdev"
00:01:44.855 gpu/*: missing internal dependency, "gpudev"
00:01:44.855 
00:01:44.855 
00:01:44.855 Build targets in project: 85
00:01:44.855 
00:01:44.855 DPDK 23.11.0
00:01:44.855 
00:01:44.855 User defined options
00:01:44.855 buildtype : debug
00:01:44.855 default_library : static
00:01:44.855 libdir : lib
00:01:44.855 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:44.855 b_sanitize : address
00:01:44.856 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:01:44.856 c_link_args : 
00:01:44.856 cpu_instruction_set: native
00:01:44.856 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump
00:01:44.856 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec
00:01:44.856 enable_docs : false
00:01:44.856 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:44.856 enable_kmods : false
00:01:44.856 tests : false
00:01:44.856 
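[Editor's note] "User defined options" above is the Meson configuration SPDK's configure script passed for its DPDK subproject. A hedged reconstruction of a matching setup command (option names are standard Meson/DPDK build options; the long disable_apps/disable_libs lists are elided here but shown in full in the log):

    meson setup build-tmp \
        -Dbuildtype=debug -Ddefault_library=static -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address -Dcpu_instruction_set=native \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false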
00:01:44.856 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja
00:01:44.856 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:01:44.856 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:44.856 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:44.856 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:44.856 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:44.856 [5/265] Linking static target lib/librte_kvargs.a
00:01:44.856 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:44.856 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:44.856 [8/265] Linking static target lib/librte_log.a
00:01:44.856 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:44.856 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:44.856 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.856 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:44.856 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:44.856 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:44.856 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:44.856 [16/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.856 [17/265] Linking target lib/librte_log.so.24.0
00:01:44.856 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:44.856 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:44.856 [20/265] Linking static target lib/librte_telemetry.a
00:01:44.856 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:44.856 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:44.856 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:44.856 [24/265] Linking target lib/librte_kvargs.so.24.0
00:01:44.856 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:44.856 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:44.856 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:44.856 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:44.856 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:44.856 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:44.856 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.856 [32/265] Linking target lib/librte_telemetry.so.24.0
00:01:44.856 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:44.856 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:44.856 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:44.856 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:44.856 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:44.856 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:45.115 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:45.115 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:45.115 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:45.115 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:45.115 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:45.115 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:45.115 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:45.372 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:45.372 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:45.373 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:45.630 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:45.630 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:45.630 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:45.630 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:45.630 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:45.887 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:45.887 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:45.887 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:45.887 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:45.887 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:45.887 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:45.887 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:46.145 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:46.145 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:46.145 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:46.145 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:46.403 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:46.404 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:46.404 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:46.404 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:46.404 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:46.662 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:46.662 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:46.662 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:46.662 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:46.662 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:46.662 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:46.662 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:46.921 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:46.921 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:46.921 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:47.179 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:47.179 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:47.179 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.179 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.179 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.179 [85/265] Linking static target lib/librte_ring.a 00:01:47.179 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.179 [87/265] Linking static target lib/librte_eal.a 00:01:47.438 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.438 [89/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.438 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.438 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.438 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.438 [93/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.438 [94/265] Linking static target lib/librte_rcu.a 00:01:47.438 [95/265] Linking static target lib/librte_mempool.a 00:01:47.697 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.697 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.956 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.956 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.956 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.214 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.214 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.214 [103/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.214 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.492 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.492 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.492 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.492 [108/265] Linking static target lib/librte_net.a 00:01:48.492 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.492 [110/265] Linking static target lib/librte_mbuf.a 00:01:48.492 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.492 [112/265] Linking static target lib/librte_meter.a 00:01:48.750 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.750 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:48.750 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.750 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.009 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.009 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.009 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.268 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.526 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.526 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.526 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.784 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:49.784 [125/265] Linking static target lib/librte_pci.a 00:01:49.784 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.784 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.784 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:49.784 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.042 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.042 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.042 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.042 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.042 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.042 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.042 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.042 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.042 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.042 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.042 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.042 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.300 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.558 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:50.558 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.558 [145/265] Linking static target lib/librte_cmdline.a 00:01:50.558 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:50.816 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.816 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.816 [149/265] Linking static target lib/librte_timer.a 00:01:50.816 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.074 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.074 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.074 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.332 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.332 [155/265] Linking static target lib/librte_ethdev.a 00:01:51.332 [156/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.332 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.332 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.590 [159/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.590 [160/265] Linking static target lib/librte_compressdev.a 00:01:51.590 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.590 [162/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.590 [163/265] Linking static 
target lib/librte_hash.a 00:01:51.590 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.590 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.849 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.849 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.849 [168/265] Linking static target lib/librte_dmadev.a 00:01:51.849 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.107 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.107 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.107 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.107 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.365 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.365 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.365 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.623 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.623 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.623 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.881 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.881 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.881 [182/265] Linking static target lib/librte_cryptodev.a 00:01:52.881 [183/265] Linking static target lib/librte_power.a 00:01:53.139 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.139 [185/265] Linking static target lib/librte_reorder.a 00:01:53.139 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.139 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.139 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.139 [189/265] Linking static target lib/librte_security.a 00:01:53.397 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.397 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.397 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.655 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.655 [194/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.655 [195/265] Linking target lib/librte_eal.so.24.0 00:01:53.655 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.914 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.914 [198/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:53.914 [199/265] Linking target lib/librte_ring.so.24.0 00:01:53.914 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.172 [201/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:54.172 [202/265] Linking target lib/librte_meter.so.24.0 00:01:54.172 [203/265] Linking target lib/librte_rcu.so.24.0 00:01:54.172 
[204/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.172 [205/265] Linking target lib/librte_mempool.so.24.0 00:01:54.172 [206/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.172 [207/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:54.172 [208/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:54.172 [209/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:54.172 [210/265] Linking target lib/librte_pci.so.24.0 00:01:54.172 [211/265] Linking target lib/librte_timer.so.24.0 00:01:54.172 [212/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.172 [213/265] Linking target lib/librte_dmadev.so.24.0 00:01:54.430 [214/265] Linking target lib/librte_mbuf.so.24.0 00:01:54.430 [215/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:54.430 [216/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:54.430 [217/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.431 [218/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:54.431 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.431 [220/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:54.431 [221/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.431 [222/265] Linking target lib/librte_net.so.24.0 00:01:54.431 [223/265] Linking target lib/librte_compressdev.so.24.0 00:01:54.688 [224/265] Linking target lib/librte_cryptodev.so.24.0 00:01:54.689 [225/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:54.689 [226/265] Linking target lib/librte_cmdline.so.24.0 00:01:54.689 [227/265] Linking target lib/librte_hash.so.24.0 00:01:54.689 [228/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:54.689 [229/265] Linking target lib/librte_reorder.so.24.0 00:01:54.689 [230/265] Linking target lib/librte_security.so.24.0 00:01:54.689 [231/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:54.947 [232/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.947 [233/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.947 [234/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.947 [235/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.206 [236/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.206 [237/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.206 [238/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.206 [239/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.206 [240/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.206 [241/265] Linking static target drivers/librte_bus_vdev.a 00:01:55.465 [242/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.465 [243/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.465 [244/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.465 
[245/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.465 [246/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.465 [247/265] Linking static target drivers/librte_bus_pci.a 00:01:55.465 [248/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.465 [249/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.465 [250/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.465 [251/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.465 [252/265] Linking static target drivers/librte_mempool_ring.a 00:01:55.723 [253/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:55.723 [254/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:55.981 [255/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.981 [256/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:56.550 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.550 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:56.809 [259/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:56.809 [260/265] Linking target lib/librte_power.so.24.0 00:01:57.067 [261/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.366 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.366 [263/265] Linking static target lib/librte_vhost.a 00:02:02.271 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.529 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:02.529 INFO: autodetecting backend as ninja 00:02:02.529 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:03.466 CC lib/log/log.o 00:02:03.466 CC lib/ut_mock/mock.o 00:02:03.466 CC lib/ut/ut.o 00:02:03.466 CC lib/log/log_deprecated.o 00:02:03.466 CC lib/log/log_flags.o 00:02:03.725 LIB libspdk_ut_mock.a 00:02:03.725 LIB libspdk_log.a 00:02:03.725 LIB libspdk_ut.a 00:02:03.984 CC lib/dma/dma.o 00:02:03.984 CC lib/ioat/ioat.o 00:02:03.984 CC lib/util/base64.o 00:02:03.984 CXX lib/trace_parser/trace.o 00:02:03.984 CC lib/util/cpuset.o 00:02:03.984 CC lib/util/bit_array.o 00:02:03.984 CC lib/util/crc32.o 00:02:03.984 CC lib/util/crc16.o 00:02:03.984 CC lib/util/crc32c.o 00:02:03.984 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.984 CC lib/util/crc32_ieee.o 00:02:03.984 CC lib/util/crc64.o 00:02:03.984 CC lib/util/dif.o 00:02:04.242 CC lib/util/fd.o 00:02:04.242 LIB libspdk_dma.a 00:02:04.242 CC lib/vfio_user/host/vfio_user.o 00:02:04.242 CC lib/util/file.o 00:02:04.242 CC lib/util/hexlify.o 00:02:04.242 CC lib/util/iov.o 00:02:04.242 CC lib/util/math.o 00:02:04.242 LIB libspdk_ioat.a 00:02:04.242 CC lib/util/pipe.o 00:02:04.242 CC lib/util/strerror_tls.o 00:02:04.242 CC lib/util/string.o 00:02:04.242 CC lib/util/uuid.o 00:02:04.501 CC lib/util/fd_group.o 00:02:04.501 LIB libspdk_vfio_user.a 00:02:04.501 CC lib/util/xor.o 00:02:04.501 CC lib/util/zipf.o 00:02:05.069 LIB libspdk_util.a 00:02:05.069 CC lib/json/json_parse.o 00:02:05.069 CC lib/json/json_util.o 00:02:05.069 CC lib/json/json_write.o 00:02:05.069 CC lib/conf/conf.o 00:02:05.069 CC 
lib/idxd/idxd.o 00:02:05.069 CC lib/idxd/idxd_user.o 00:02:05.069 CC lib/env_dpdk/env.o 00:02:05.069 CC lib/rdma/common.o 00:02:05.069 CC lib/vmd/vmd.o 00:02:05.327 LIB libspdk_trace_parser.a 00:02:05.327 CC lib/vmd/led.o 00:02:05.327 LIB libspdk_conf.a 00:02:05.327 CC lib/idxd/idxd_kernel.o 00:02:05.327 CC lib/env_dpdk/memory.o 00:02:05.327 CC lib/env_dpdk/pci.o 00:02:05.585 CC lib/rdma/rdma_verbs.o 00:02:05.585 CC lib/env_dpdk/init.o 00:02:05.585 CC lib/env_dpdk/threads.o 00:02:05.585 LIB libspdk_json.a 00:02:05.585 CC lib/env_dpdk/pci_ioat.o 00:02:05.585 CC lib/env_dpdk/pci_virtio.o 00:02:05.585 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.585 LIB libspdk_rdma.a 00:02:05.843 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.843 CC lib/env_dpdk/pci_vmd.o 00:02:05.843 CC lib/env_dpdk/pci_idxd.o 00:02:05.843 CC lib/env_dpdk/pci_event.o 00:02:05.843 CC lib/env_dpdk/sigbus_handler.o 00:02:05.843 CC lib/env_dpdk/pci_dpdk.o 00:02:05.843 LIB libspdk_idxd.a 00:02:05.843 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.843 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.101 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.101 LIB libspdk_vmd.a 00:02:06.101 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.360 LIB libspdk_jsonrpc.a 00:02:06.360 CC lib/rpc/rpc.o 00:02:06.626 LIB libspdk_rpc.a 00:02:06.915 CC lib/trace/trace.o 00:02:06.915 CC lib/trace/trace_flags.o 00:02:06.915 CC lib/trace/trace_rpc.o 00:02:06.915 CC lib/sock/sock.o 00:02:06.915 CC lib/notify/notify.o 00:02:06.915 CC lib/sock/sock_rpc.o 00:02:06.915 CC lib/notify/notify_rpc.o 00:02:06.915 LIB libspdk_notify.a 00:02:07.185 LIB libspdk_env_dpdk.a 00:02:07.185 LIB libspdk_trace.a 00:02:07.185 CC lib/thread/thread.o 00:02:07.185 CC lib/thread/iobuf.o 00:02:07.185 LIB libspdk_sock.a 00:02:07.443 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.443 CC lib/nvme/nvme_ctrlr.o 00:02:07.443 CC lib/nvme/nvme_ns_cmd.o 00:02:07.443 CC lib/nvme/nvme_fabric.o 00:02:07.443 CC lib/nvme/nvme_pcie.o 00:02:07.443 CC lib/nvme/nvme_qpair.o 00:02:07.443 CC lib/nvme/nvme_ns.o 00:02:07.443 CC lib/nvme/nvme_pcie_common.o 00:02:07.701 CC lib/nvme/nvme.o 00:02:08.268 CC lib/nvme/nvme_quirks.o 00:02:08.268 CC lib/nvme/nvme_transport.o 00:02:08.526 CC lib/nvme/nvme_discovery.o 00:02:08.526 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.526 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.526 CC lib/nvme/nvme_tcp.o 00:02:08.785 CC lib/nvme/nvme_opal.o 00:02:08.785 CC lib/nvme/nvme_io_msg.o 00:02:09.043 CC lib/nvme/nvme_poll_group.o 00:02:09.043 CC lib/nvme/nvme_zns.o 00:02:09.043 CC lib/nvme/nvme_cuse.o 00:02:09.302 CC lib/nvme/nvme_vfio_user.o 00:02:09.302 LIB libspdk_thread.a 00:02:09.302 CC lib/nvme/nvme_rdma.o 00:02:09.561 CC lib/accel/accel.o 00:02:09.561 CC lib/blob/blobstore.o 00:02:09.561 CC lib/blob/request.o 00:02:09.819 CC lib/blob/zeroes.o 00:02:09.819 CC lib/blob/blob_bs_dev.o 00:02:09.819 CC lib/accel/accel_rpc.o 00:02:10.078 CC lib/init/json_config.o 00:02:10.078 CC lib/virtio/virtio.o 00:02:10.078 CC lib/init/subsystem.o 00:02:10.078 CC lib/accel/accel_sw.o 00:02:10.078 CC lib/init/subsystem_rpc.o 00:02:10.336 CC lib/init/rpc.o 00:02:10.336 CC lib/virtio/virtio_vhost_user.o 00:02:10.336 CC lib/virtio/virtio_vfio_user.o 00:02:10.336 CC lib/virtio/virtio_pci.o 00:02:10.595 LIB libspdk_init.a 00:02:10.595 CC lib/event/reactor.o 00:02:10.595 CC lib/event/app.o 00:02:10.595 CC lib/event/log_rpc.o 00:02:10.595 CC lib/event/app_rpc.o 00:02:10.595 CC lib/event/scheduler_static.o 00:02:10.854 LIB libspdk_virtio.a 00:02:10.854 LIB libspdk_accel.a 00:02:11.113 CC lib/bdev/bdev.o 00:02:11.113 CC lib/bdev/bdev_rpc.o 
00:02:11.113 CC lib/bdev/part.o
00:02:11.113 CC lib/bdev/bdev_zone.o
00:02:11.113 CC lib/bdev/scsi_nvme.o
00:02:11.113 LIB libspdk_nvme.a
00:02:11.113 LIB libspdk_event.a
00:02:13.646 LIB libspdk_blob.a
00:02:13.905 CC lib/lvol/lvol.o
00:02:13.905 CC lib/blobfs/tree.o
00:02:13.905 CC lib/blobfs/blobfs.o
00:02:14.842 LIB libspdk_bdev.a
00:02:14.842 CC lib/nvmf/ctrlr.o
00:02:14.842 CC lib/nvmf/ctrlr_discovery.o
00:02:14.842 CC lib/nvmf/ctrlr_bdev.o
00:02:14.842 CC lib/nvmf/subsystem.o
00:02:14.842 CC lib/scsi/dev.o
00:02:14.842 CC lib/ftl/ftl_core.o
00:02:14.842 CC lib/nbd/nbd.o
00:02:14.842 CC lib/ublk/ublk.o
00:02:15.100 LIB libspdk_blobfs.a
00:02:15.100 CC lib/nvmf/nvmf.o
00:02:15.100 LIB libspdk_lvol.a
00:02:15.100 CC lib/nvmf/nvmf_rpc.o
00:02:15.359 CC lib/scsi/lun.o
00:02:15.359 CC lib/ftl/ftl_init.o
00:02:15.359 CC lib/nbd/nbd_rpc.o
00:02:15.617 CC lib/nvmf/transport.o
00:02:15.617 CC lib/ftl/ftl_layout.o
00:02:15.617 LIB libspdk_nbd.a
00:02:15.617 CC lib/nvmf/tcp.o
00:02:15.617 CC lib/scsi/port.o
00:02:15.617 CC lib/ublk/ublk_rpc.o
00:02:15.881 CC lib/nvmf/rdma.o
00:02:15.881 CC lib/scsi/scsi.o
00:02:15.881 LIB libspdk_ublk.a
00:02:15.881 CC lib/ftl/ftl_debug.o
00:02:15.881 CC lib/ftl/ftl_io.o
00:02:16.138 CC lib/scsi/scsi_bdev.o
00:02:16.138 CC lib/ftl/ftl_sb.o
00:02:16.138 CC lib/scsi/scsi_pr.o
00:02:16.138 CC lib/ftl/ftl_l2p.o
00:02:16.397 CC lib/ftl/ftl_l2p_flat.o
00:02:16.397 CC lib/ftl/ftl_nv_cache.o
00:02:16.397 CC lib/scsi/scsi_rpc.o
00:02:16.397 CC lib/ftl/ftl_band.o
00:02:16.397 CC lib/ftl/ftl_band_ops.o
00:02:16.656 CC lib/scsi/task.o
00:02:16.656 CC lib/ftl/ftl_writer.o
00:02:16.656 CC lib/ftl/ftl_rq.o
00:02:16.656 CC lib/ftl/ftl_reloc.o
00:02:16.914 LIB libspdk_scsi.a
00:02:16.914 CC lib/ftl/ftl_l2p_cache.o
00:02:16.914 CC lib/ftl/ftl_p2l.o
00:02:16.914 CC lib/ftl/mngt/ftl_mngt.o
00:02:16.914 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:16.914 CC lib/iscsi/conn.o
00:02:17.173 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:17.173 CC lib/iscsi/init_grp.o
00:02:17.173 CC lib/vhost/vhost.o
00:02:17.173 CC lib/vhost/vhost_rpc.o
00:02:17.432 CC lib/vhost/vhost_scsi.o
00:02:17.432 CC lib/vhost/vhost_blk.o
00:02:17.432 CC lib/iscsi/iscsi.o
00:02:17.690 CC lib/iscsi/md5.o
00:02:17.690 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:17.690 CC lib/iscsi/param.o
00:02:17.949 CC lib/iscsi/portal_grp.o
00:02:17.949 CC lib/iscsi/tgt_node.o
00:02:17.949 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:18.207 CC lib/vhost/rte_vhost_user.o
00:02:18.207 CC lib/iscsi/iscsi_subsystem.o
00:02:18.207 CC lib/iscsi/iscsi_rpc.o
00:02:18.207 CC lib/iscsi/task.o
00:02:18.466 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:18.466 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:18.466 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:18.466 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:18.466 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:18.725 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:18.725 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:18.725 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:18.725 CC lib/ftl/utils/ftl_conf.o
00:02:18.725 CC lib/ftl/utils/ftl_md.o
00:02:18.725 CC lib/ftl/utils/ftl_mempool.o
00:02:18.983 CC lib/ftl/utils/ftl_bitmap.o
00:02:18.983 CC lib/ftl/utils/ftl_property.o
00:02:18.983 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:18.983 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:18.983 LIB libspdk_nvmf.a
00:02:18.983 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:18.983 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:18.983 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:19.241 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:19.241 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:19.241 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:19.241 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:19.241 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:19.241 CC lib/ftl/base/ftl_base_dev.o
00:02:19.241 CC lib/ftl/base/ftl_base_bdev.o
00:02:19.241 CC lib/ftl/ftl_trace.o
00:02:19.500 LIB libspdk_vhost.a
00:02:19.500 LIB libspdk_iscsi.a
00:02:19.758 LIB libspdk_ftl.a
00:02:20.017 CC module/env_dpdk/env_dpdk_rpc.o
00:02:20.017 CC module/blob/bdev/blob_bdev.o
00:02:20.017 CC module/scheduler/gscheduler/gscheduler.o
00:02:20.017 CC module/accel/error/accel_error.o
00:02:20.017 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:20.017 CC module/accel/ioat/accel_ioat.o
00:02:20.017 CC module/sock/posix/posix.o
00:02:20.017 CC module/accel/iaa/accel_iaa.o
00:02:20.017 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:20.017 CC module/accel/dsa/accel_dsa.o
00:02:20.017 LIB libspdk_env_dpdk_rpc.a
00:02:20.275 CC module/accel/dsa/accel_dsa_rpc.o
00:02:20.275 LIB libspdk_scheduler_gscheduler.a
00:02:20.275 LIB libspdk_scheduler_dpdk_governor.a
00:02:20.275 CC module/accel/error/accel_error_rpc.o
00:02:20.275 CC module/accel/iaa/accel_iaa_rpc.o
00:02:20.275 CC module/accel/ioat/accel_ioat_rpc.o
00:02:20.275 LIB libspdk_scheduler_dynamic.a
00:02:20.275 LIB libspdk_accel_dsa.a
00:02:20.275 LIB libspdk_blob_bdev.a
00:02:20.275 LIB libspdk_accel_ioat.a
00:02:20.275 LIB libspdk_accel_iaa.a
00:02:20.275 LIB libspdk_accel_error.a
00:02:20.533 CC module/bdev/malloc/bdev_malloc.o
00:02:20.533 CC module/bdev/delay/vbdev_delay.o
00:02:20.533 CC module/blobfs/bdev/blobfs_bdev.o
00:02:20.533 CC module/bdev/error/vbdev_error.o
00:02:20.533 CC module/bdev/null/bdev_null.o
00:02:20.533 CC module/bdev/nvme/bdev_nvme.o
00:02:20.533 CC module/bdev/gpt/gpt.o
00:02:20.533 CC module/bdev/lvol/vbdev_lvol.o
00:02:20.533 CC module/bdev/passthru/vbdev_passthru.o
00:02:20.792 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:20.792 CC module/bdev/gpt/vbdev_gpt.o
00:02:20.792 CC module/bdev/null/bdev_null_rpc.o
00:02:20.792 CC module/bdev/error/vbdev_error_rpc.o
00:02:20.792 LIB libspdk_blobfs_bdev.a
00:02:20.792 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:21.050 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:21.050 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:21.050 LIB libspdk_sock_posix.a
00:02:21.050 LIB libspdk_bdev_error.a
00:02:21.050 CC module/bdev/raid/bdev_raid.o
00:02:21.050 LIB libspdk_bdev_null.a
00:02:21.050 CC module/bdev/raid/bdev_raid_rpc.o
00:02:21.050 LIB libspdk_bdev_gpt.a
00:02:21.050 LIB libspdk_bdev_passthru.a
00:02:21.050 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:21.050 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:21.050 CC module/bdev/split/vbdev_split.o
00:02:21.050 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:21.308 LIB libspdk_bdev_delay.a
00:02:21.308 LIB libspdk_bdev_malloc.a
00:02:21.308 CC module/bdev/aio/bdev_aio.o
00:02:21.308 CC module/bdev/aio/bdev_aio_rpc.o
00:02:21.308 CC module/bdev/nvme/nvme_rpc.o
00:02:21.308 CC module/bdev/nvme/bdev_mdns_client.o
00:02:21.308 CC module/bdev/nvme/vbdev_opal.o
00:02:21.566 CC module/bdev/split/vbdev_split_rpc.o
00:02:21.566 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:21.566 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:21.566 LIB libspdk_bdev_lvol.a
00:02:21.566 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:21.566 LIB libspdk_bdev_split.a
00:02:21.566 LIB libspdk_bdev_aio.a
00:02:21.566 CC module/bdev/ftl/bdev_ftl.o
00:02:21.825 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:21.825 CC module/bdev/raid/bdev_raid_sb.o
00:02:21.825 CC module/bdev/iscsi/bdev_iscsi.o
00:02:21.825 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:21.825 LIB libspdk_bdev_zone_block.a
00:02:21.825 CC module/bdev/raid/raid0.o
00:02:21.825 CC module/bdev/raid/raid1.o
00:02:22.083 CC module/bdev/raid/concat.o
00:02:22.083 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:22.083 CC module/bdev/raid/raid5f.o
00:02:22.083 LIB libspdk_bdev_ftl.a
00:02:22.083 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:22.083 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:22.342 LIB libspdk_bdev_iscsi.a
00:02:22.600 LIB libspdk_bdev_virtio.a
00:02:22.600 LIB libspdk_bdev_raid.a
00:02:23.534 LIB libspdk_bdev_nvme.a
00:02:23.793 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:23.793 CC module/event/subsystems/vmd/vmd.o
00:02:23.793 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:23.793 CC module/event/subsystems/scheduler/scheduler.o
00:02:23.793 CC module/event/subsystems/iobuf/iobuf.o
00:02:23.793 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:23.793 CC module/event/subsystems/sock/sock.o
00:02:24.051 LIB libspdk_event_vhost_blk.a
00:02:24.051 LIB libspdk_event_sock.a
00:02:24.051 LIB libspdk_event_scheduler.a
00:02:24.051 LIB libspdk_event_vmd.a
00:02:24.051 LIB libspdk_event_iobuf.a
00:02:24.309 CC module/event/subsystems/accel/accel.o
00:02:24.569 LIB libspdk_event_accel.a
00:02:24.569 CC module/event/subsystems/bdev/bdev.o
00:02:24.837 LIB libspdk_event_bdev.a
00:02:25.096 CC module/event/subsystems/scsi/scsi.o
00:02:25.096 CC module/event/subsystems/ublk/ublk.o
00:02:25.096 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:25.096 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:25.096 CC module/event/subsystems/nbd/nbd.o
00:02:25.096 LIB libspdk_event_nbd.a
00:02:25.096 LIB libspdk_event_ublk.a
00:02:25.096 LIB libspdk_event_scsi.a
00:02:25.355 LIB libspdk_event_nvmf.a
00:02:25.355 CC module/event/subsystems/iscsi/iscsi.o
00:02:25.355 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:25.613 LIB libspdk_event_vhost_scsi.a
00:02:25.613 LIB libspdk_event_iscsi.a
00:02:25.613 CC app/trace_record/trace_record.o
00:02:25.613 CXX app/trace/trace.o
00:02:25.613 CC app/spdk_lspci/spdk_lspci.o
00:02:25.613 CC app/spdk_nvme_perf/perf.o
00:02:25.870 CC app/iscsi_tgt/iscsi_tgt.o
00:02:25.870 CC app/nvmf_tgt/nvmf_main.o
00:02:25.870 CC examples/accel/perf/accel_perf.o
00:02:25.870 CC app/spdk_tgt/spdk_tgt.o
00:02:25.870 CC examples/bdev/hello_world/hello_bdev.o
00:02:25.870 CC test/accel/dif/dif.o
00:02:25.870 LINK spdk_lspci
00:02:25.870 LINK nvmf_tgt
00:02:26.128 LINK spdk_trace_record
00:02:26.128 LINK iscsi_tgt
00:02:26.128 LINK spdk_tgt
00:02:26.128 LINK hello_bdev
00:02:26.128 LINK spdk_trace
00:02:26.386 LINK dif
00:02:26.386 LINK accel_perf
00:02:26.645 CC app/spdk_nvme_identify/identify.o
00:02:26.645 CC examples/bdev/bdevperf/bdevperf.o
00:02:26.903 CC test/app/bdev_svc/bdev_svc.o
00:02:26.903 LINK spdk_nvme_perf
00:02:27.160 LINK bdev_svc
00:02:27.160 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:27.726 LINK bdevperf
00:02:27.726 LINK spdk_nvme_identify
00:02:27.726 LINK nvme_fuzz
00:02:27.984 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:27.984 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:27.984 CC app/spdk_nvme_discover/discovery_aer.o
00:02:27.984 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:28.242 CC test/bdev/bdevio/bdevio.o
00:02:28.242 CC examples/blob/hello_world/hello_blob.o
00:02:28.242 LINK spdk_nvme_discover
00:02:28.242 CC app/spdk_top/spdk_top.o
00:02:28.500 LINK hello_blob
00:02:28.500 CC test/app/histogram_perf/histogram_perf.o
00:02:28.500 LINK vhost_fuzz
00:02:28.500 CC test/app/jsoncat/jsoncat.o
00:02:28.759 LINK bdevio
00:02:28.759 LINK histogram_perf
00:02:28.759 LINK jsoncat
00:02:29.017 CC test/blobfs/mkfs/mkfs.o
00:02:29.017 TEST_HEADER include/spdk/accel.h
00:02:29.017 TEST_HEADER include/spdk/accel_module.h
00:02:29.017 TEST_HEADER include/spdk/assert.h
00:02:29.017 TEST_HEADER include/spdk/barrier.h
00:02:29.017 TEST_HEADER include/spdk/base64.h
00:02:29.017 TEST_HEADER include/spdk/bdev.h
00:02:29.017 TEST_HEADER include/spdk/bdev_module.h
00:02:29.017 TEST_HEADER include/spdk/bdev_zone.h
00:02:29.017 TEST_HEADER include/spdk/bit_array.h
00:02:29.017 TEST_HEADER include/spdk/bit_pool.h
00:02:29.017 TEST_HEADER include/spdk/blob.h
00:02:29.017 TEST_HEADER include/spdk/blob_bdev.h
00:02:29.017 TEST_HEADER include/spdk/blobfs.h
00:02:29.017 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:29.017 TEST_HEADER include/spdk/conf.h
00:02:29.017 TEST_HEADER include/spdk/config.h
00:02:29.017 TEST_HEADER include/spdk/cpuset.h
00:02:29.017 TEST_HEADER include/spdk/crc16.h
00:02:29.017 TEST_HEADER include/spdk/crc32.h
00:02:29.017 TEST_HEADER include/spdk/crc64.h
00:02:29.017 TEST_HEADER include/spdk/dif.h
00:02:29.017 TEST_HEADER include/spdk/dma.h
00:02:29.017 TEST_HEADER include/spdk/endian.h
00:02:29.017 TEST_HEADER include/spdk/env.h
00:02:29.017 TEST_HEADER include/spdk/env_dpdk.h
00:02:29.017 TEST_HEADER include/spdk/event.h
00:02:29.017 TEST_HEADER include/spdk/fd.h
00:02:29.017 TEST_HEADER include/spdk/fd_group.h
00:02:29.017 TEST_HEADER include/spdk/file.h
00:02:29.017 TEST_HEADER include/spdk/ftl.h
00:02:29.017 TEST_HEADER include/spdk/gpt_spec.h
00:02:29.017 TEST_HEADER include/spdk/hexlify.h
00:02:29.017 TEST_HEADER include/spdk/histogram_data.h
00:02:29.017 TEST_HEADER include/spdk/idxd.h
00:02:29.017 TEST_HEADER include/spdk/idxd_spec.h
00:02:29.017 TEST_HEADER include/spdk/init.h
00:02:29.017 TEST_HEADER include/spdk/ioat.h
00:02:29.017 TEST_HEADER include/spdk/ioat_spec.h
00:02:29.017 TEST_HEADER include/spdk/iscsi_spec.h
00:02:29.017 TEST_HEADER include/spdk/json.h
00:02:29.276 TEST_HEADER include/spdk/jsonrpc.h
00:02:29.276 TEST_HEADER include/spdk/likely.h
00:02:29.276 TEST_HEADER include/spdk/log.h
00:02:29.276 TEST_HEADER include/spdk/lvol.h
00:02:29.276 TEST_HEADER include/spdk/memory.h
00:02:29.276 TEST_HEADER include/spdk/mmio.h
00:02:29.276 TEST_HEADER include/spdk/nbd.h
00:02:29.276 TEST_HEADER include/spdk/notify.h
00:02:29.276 TEST_HEADER include/spdk/nvme.h
00:02:29.276 TEST_HEADER include/spdk/nvme_intel.h
00:02:29.276 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:29.276 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:29.276 TEST_HEADER include/spdk/nvme_spec.h
00:02:29.276 TEST_HEADER include/spdk/nvme_zns.h
00:02:29.276 LINK mkfs
00:02:29.276 TEST_HEADER include/spdk/nvmf.h
00:02:29.276 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:29.276 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:29.276 TEST_HEADER include/spdk/nvmf_spec.h
00:02:29.276 TEST_HEADER include/spdk/nvmf_transport.h
00:02:29.276 TEST_HEADER include/spdk/opal.h
00:02:29.276 TEST_HEADER include/spdk/opal_spec.h
00:02:29.276 TEST_HEADER include/spdk/pci_ids.h
00:02:29.276 TEST_HEADER include/spdk/pipe.h
00:02:29.276 TEST_HEADER include/spdk/queue.h
00:02:29.276 TEST_HEADER include/spdk/reduce.h
00:02:29.276 TEST_HEADER include/spdk/rpc.h
00:02:29.276 TEST_HEADER include/spdk/scheduler.h
00:02:29.276 TEST_HEADER include/spdk/scsi.h
00:02:29.276 TEST_HEADER include/spdk/scsi_spec.h
00:02:29.276 TEST_HEADER include/spdk/sock.h
00:02:29.276 TEST_HEADER include/spdk/stdinc.h
00:02:29.277 TEST_HEADER include/spdk/string.h
00:02:29.277 TEST_HEADER include/spdk/thread.h
00:02:29.277 TEST_HEADER include/spdk/trace.h
00:02:29.277 TEST_HEADER include/spdk/trace_parser.h
00:02:29.277 TEST_HEADER include/spdk/tree.h
00:02:29.277 TEST_HEADER include/spdk/ublk.h
00:02:29.277 TEST_HEADER include/spdk/util.h
00:02:29.277 TEST_HEADER include/spdk/uuid.h
00:02:29.277 TEST_HEADER include/spdk/version.h
00:02:29.277 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:29.277 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:29.277 TEST_HEADER include/spdk/vhost.h
00:02:29.277 TEST_HEADER include/spdk/vmd.h
00:02:29.277 TEST_HEADER include/spdk/xor.h
00:02:29.277 TEST_HEADER include/spdk/zipf.h
00:02:29.277 CXX test/cpp_headers/accel.o
00:02:29.535 CC test/dma/test_dma/test_dma.o
00:02:29.535 CC test/env/mem_callbacks/mem_callbacks.o
00:02:29.535 CXX test/cpp_headers/accel_module.o
00:02:29.535 CC test/event/event_perf/event_perf.o
00:02:29.535 CC app/vhost/vhost.o
00:02:29.535 CXX test/cpp_headers/assert.o
00:02:29.535 LINK spdk_top
00:02:29.535 LINK event_perf
00:02:29.793 LINK vhost
00:02:29.794 CXX test/cpp_headers/barrier.o
00:02:29.794 LINK test_dma
00:02:30.052 CXX test/cpp_headers/base64.o
00:02:30.052 LINK mem_callbacks
00:02:30.310 CXX test/cpp_headers/bdev.o
00:02:30.310 LINK iscsi_fuzz
00:02:30.310 CC test/app/stub/stub.o
00:02:30.310 CC test/event/reactor/reactor.o
00:02:30.310 CC examples/blob/cli/blobcli.o
00:02:30.310 CXX test/cpp_headers/bdev_module.o
00:02:30.569 LINK reactor
00:02:30.569 LINK stub
00:02:30.569 CC test/env/vtophys/vtophys.o
00:02:30.569 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:30.827 CXX test/cpp_headers/bdev_zone.o
00:02:30.827 LINK vtophys
00:02:31.085 LINK env_dpdk_post_init
00:02:31.085 LINK blobcli
00:02:31.085 CXX test/cpp_headers/bit_array.o
00:02:31.085 CC test/event/reactor_perf/reactor_perf.o
00:02:31.085 CC app/spdk_dd/spdk_dd.o
00:02:31.343 CC app/fio/nvme/fio_plugin.o
00:02:31.343 CXX test/cpp_headers/bit_pool.o
00:02:31.343 LINK reactor_perf
00:02:31.602 CC test/env/memory/memory_ut.o
00:02:31.602 CXX test/cpp_headers/blob.o
00:02:31.602 CC test/lvol/esnap/esnap.o
00:02:31.602 LINK spdk_dd
00:02:31.602 CC test/event/app_repeat/app_repeat.o
00:02:31.602 CC app/fio/bdev/fio_plugin.o
00:02:31.860 CXX test/cpp_headers/blob_bdev.o
00:02:31.860 LINK app_repeat
00:02:31.860 CC test/env/pci/pci_ut.o
00:02:31.860 CXX test/cpp_headers/blobfs.o
00:02:32.118 CC test/nvme/aer/aer.o
00:02:32.118 LINK spdk_nvme
00:02:32.118 CXX test/cpp_headers/blobfs_bdev.o
00:02:32.376 LINK spdk_bdev
00:02:32.376 LINK aer
00:02:32.376 CXX test/cpp_headers/conf.o
00:02:32.376 LINK pci_ut
00:02:32.634 CXX test/cpp_headers/config.o
00:02:32.634 LINK memory_ut
00:02:32.634 CXX test/cpp_headers/cpuset.o
00:02:32.892 CC test/event/scheduler/scheduler.o
00:02:32.892 CXX test/cpp_headers/crc16.o
00:02:32.892 CC examples/ioat/perf/perf.o
00:02:32.892 CC examples/ioat/verify/verify.o
00:02:32.892 CC test/rpc_client/rpc_client_test.o
00:02:32.892 CC examples/nvme/hello_world/hello_world.o
00:02:33.150 CXX test/cpp_headers/crc32.o
00:02:33.150 LINK ioat_perf
00:02:33.150 LINK verify
00:02:33.150 LINK scheduler
00:02:33.150 LINK rpc_client_test
00:02:33.150 LINK hello_world
00:02:33.408 CC test/nvme/reset/reset.o
00:02:33.408 CXX test/cpp_headers/crc64.o
00:02:33.408 CXX test/cpp_headers/dif.o
00:02:33.408 CC examples/nvme/reconnect/reconnect.o
00:02:33.667 LINK reset
00:02:33.667 CXX test/cpp_headers/dma.o
00:02:33.667 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:33.667 CC test/nvme/e2edp/nvme_dp.o
00:02:33.667 CC test/nvme/sgl/sgl.o
00:02:34.231 CXX test/cpp_headers/endian.o
00:02:34.231 CXX test/cpp_headers/env.o
00:02:34.231 LINK sgl
00:02:34.231 LINK nvme_dp
00:02:34.231 LINK reconnect
00:02:34.231 CC test/nvme/overhead/overhead.o
00:02:34.231 CXX test/cpp_headers/env_dpdk.o
00:02:34.231 CC test/nvme/err_injection/err_injection.o
00:02:34.490 LINK nvme_manage
00:02:34.490 CXX test/cpp_headers/event.o
00:02:34.490 LINK overhead
00:02:34.490 CC examples/nvme/arbitration/arbitration.o
00:02:34.490 LINK err_injection
00:02:34.748 CXX test/cpp_headers/fd.o
00:02:34.748 CXX test/cpp_headers/fd_group.o
00:02:35.005 CXX test/cpp_headers/file.o
00:02:35.005 CXX test/cpp_headers/ftl.o
00:02:35.005 LINK arbitration
00:02:35.005 CC test/thread/poller_perf/poller_perf.o
00:02:35.263 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:02:35.263 CC test/nvme/startup/startup.o
00:02:35.263 LINK poller_perf
00:02:35.263 CXX test/cpp_headers/gpt_spec.o
00:02:35.263 CC test/nvme/reserve/reserve.o
00:02:35.263 CC test/nvme/simple_copy/simple_copy.o
00:02:35.521 CC test/nvme/connect_stress/connect_stress.o
00:02:35.521 LINK histogram_ut
00:02:35.521 CC examples/sock/hello_world/hello_sock.o
00:02:35.521 LINK startup
00:02:35.521 CXX test/cpp_headers/hexlify.o
00:02:35.781 LINK reserve
00:02:35.781 LINK simple_copy
00:02:35.781 LINK connect_stress
00:02:35.781 CXX test/cpp_headers/histogram_data.o
00:02:35.781 CC examples/nvme/hotplug/hotplug.o
00:02:36.040 CC test/unit/lib/accel/accel.c/accel_ut.o
00:02:36.040 LINK hello_sock
00:02:36.040 CC test/thread/lock/spdk_lock.o
00:02:36.040 CXX test/cpp_headers/idxd.o
00:02:36.298 LINK hotplug
00:02:36.298 CXX test/cpp_headers/idxd_spec.o
00:02:36.298 CXX test/cpp_headers/init.o
00:02:36.557 CC examples/vmd/lsvmd/lsvmd.o
00:02:36.557 CC examples/vmd/led/led.o
00:02:36.557 CXX test/cpp_headers/ioat.o
00:02:36.557 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:36.557 CC test/nvme/boot_partition/boot_partition.o
00:02:36.557 CC examples/nvme/abort/abort.o
00:02:36.557 LINK lsvmd
00:02:36.815 LINK led
00:02:36.815 LINK boot_partition
00:02:36.815 LINK cmb_copy
00:02:36.815 CXX test/cpp_headers/ioat_spec.o
00:02:37.073 CXX test/cpp_headers/iscsi_spec.o
00:02:37.073 LINK abort
00:02:37.073 CC examples/nvmf/nvmf/nvmf.o
00:02:37.073 CXX test/cpp_headers/json.o
00:02:37.331 CXX test/cpp_headers/jsonrpc.o
00:02:37.589 CXX test/cpp_headers/likely.o
00:02:37.589 LINK nvmf
00:02:37.589 CXX test/cpp_headers/log.o
00:02:37.589 CC test/nvme/compliance/nvme_compliance.o
00:02:37.589 CC examples/util/zipf/zipf.o
00:02:37.589 CC examples/thread/thread/thread_ex.o
00:02:37.847 CXX test/cpp_headers/lvol.o
00:02:37.847 CC examples/idxd/perf/perf.o
00:02:37.847 LINK zipf
00:02:37.847 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:38.112 LINK thread
00:02:38.112 CXX test/cpp_headers/memory.o
00:02:38.112 LINK nvme_compliance
00:02:38.112 LINK pmr_persistence
00:02:38.112 CXX test/cpp_headers/mmio.o
00:02:38.112 LINK spdk_lock
00:02:38.112 LINK idxd_perf
00:02:38.379 CXX test/cpp_headers/nbd.o
00:02:38.379 CXX test/cpp_headers/notify.o
00:02:38.379 LINK esnap
00:02:38.637 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:02:38.637 CXX test/cpp_headers/nvme.o
00:02:38.637 CXX test/cpp_headers/nvme_intel.o
00:02:38.895 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:02:38.895 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:02:38.895 CXX test/cpp_headers/nvme_ocssd.o
00:02:38.895 CC test/nvme/fused_ordering/fused_ordering.o
00:02:38.895 LINK accel_ut
00:02:38.895 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:38.895 CC test/nvme/fdp/fdp.o
00:02:39.153 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:39.153 LINK tree_ut
00:02:39.153 LINK fused_ordering
00:02:39.153 LINK doorbell_aers
00:02:39.411 CC test/nvme/cuse/cuse.o
00:02:39.411 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:02:39.411 CXX test/cpp_headers/nvme_spec.o
00:02:39.411 LINK fdp
00:02:39.411 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:02:39.669 CXX test/cpp_headers/nvme_zns.o
00:02:39.669 LINK blob_bdev_ut
00:02:39.669 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:39.927 CXX test/cpp_headers/nvmf.o
00:02:39.927 CC test/unit/lib/blob/blob.c/blob_ut.o
00:02:39.927 CXX test/cpp_headers/nvmf_cmd.o
00:02:39.927 LINK interrupt_tgt
00:02:40.186 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:02:40.186 CC test/unit/lib/bdev/part.c/part_ut.o
00:02:40.186 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:40.186 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:02:40.443 LINK blobfs_bdev_ut
00:02:40.443 CXX test/cpp_headers/nvmf_spec.o
00:02:40.701 LINK cuse
00:02:40.702 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:02:40.702 LINK scsi_nvme_ut
00:02:40.959 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:02:40.959 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:02:41.217 CXX test/cpp_headers/nvmf_transport.o
00:02:41.217 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:02:41.217 LINK gpt_ut
00:02:41.217 LINK blobfs_async_ut
00:02:41.217 LINK blobfs_sync_ut
00:02:41.475 CXX test/cpp_headers/opal.o
00:02:41.475 CC test/unit/lib/dma/dma.c/dma_ut.o
00:02:41.733 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:02:41.733 CC test/unit/lib/event/app.c/app_ut.o
00:02:41.991 CXX test/cpp_headers/opal_spec.o
00:02:41.991 LINK dma_ut
00:02:41.991 CXX test/cpp_headers/pci_ids.o
00:02:42.249 LINK bdev_zone_ut
00:02:42.249 CXX test/cpp_headers/pipe.o
00:02:42.249 LINK vbdev_lvol_ut
00:02:42.508 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:02:42.508 CXX test/cpp_headers/queue.o
00:02:42.508 CXX test/cpp_headers/reduce.o
00:02:42.508 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:02:42.508 LINK app_ut
00:02:42.508 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:02:42.766 CXX test/cpp_headers/rpc.o
00:02:42.766 LINK ioat_ut
00:02:42.766 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:02:43.024 CXX test/cpp_headers/scheduler.o
00:02:43.024 CXX test/cpp_headers/scsi.o
00:02:43.282 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:02:43.282 CXX test/cpp_headers/scsi_spec.o
00:02:43.282 LINK vbdev_zone_block_ut
00:02:43.540 CXX test/cpp_headers/sock.o
00:02:43.540 CXX test/cpp_headers/stdinc.o
00:02:43.540 CXX test/cpp_headers/string.o
00:02:43.798 LINK bdev_raid_ut
00:02:43.798 CXX test/cpp_headers/thread.o
00:02:43.798 LINK reactor_ut
00:02:43.798 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:02:44.056 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:02:44.056 CXX test/cpp_headers/trace.o
00:02:44.056 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:02:44.313 CXX test/cpp_headers/trace_parser.o
00:02:44.313 CXX test/cpp_headers/tree.o
00:02:44.572 CXX test/cpp_headers/ublk.o
00:02:44.572 LINK conn_ut
00:02:44.572 LINK part_ut
00:02:44.572 LINK bdev_raid_sb_ut
00:02:44.572 LINK jsonrpc_server_ut
00:02:44.572 CXX test/cpp_headers/util.o
00:02:44.830 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:02:44.830 CXX test/cpp_headers/uuid.o
00:02:44.830 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:02:45.088 CC test/unit/lib/log/log.c/log_ut.o
00:02:45.088 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:02:45.088 CXX test/cpp_headers/version.o
00:02:45.088 CXX test/cpp_headers/vfio_user_pci.o
00:02:45.088 LINK bdev_ut
00:02:45.346 LINK log_ut
00:02:45.346 CXX test/cpp_headers/vfio_user_spec.o
00:02:45.346 LINK init_grp_ut
00:02:45.346 CXX test/cpp_headers/vhost.o
00:02:45.604 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:02:45.604 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:02:45.604 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:02:45.604 LINK concat_ut
00:02:45.604 CXX test/cpp_headers/vmd.o
00:02:45.869 CXX test/cpp_headers/xor.o
00:02:45.869 LINK bdev_ut
00:02:45.869 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:02:45.869 CXX test/cpp_headers/zipf.o
00:02:46.128 CC test/unit/lib/notify/notify.c/notify_ut.o
00:02:46.128 LINK json_util_ut
00:02:46.128 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:02:46.386 LINK raid1_ut
00:02:46.386 LINK json_write_ut
00:02:46.386 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:02:46.644 LINK notify_ut
00:02:46.644 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:02:46.644 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:02:46.644 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:02:46.902 LINK json_parse_ut
00:02:47.160 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:02:47.160 LINK lvol_ut
00:02:47.160 LINK dev_ut
00:02:47.160 LINK scsi_ut
00:02:47.419 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:02:47.419 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:02:47.761 CC test/unit/lib/sock/sock.c/sock_ut.o
00:02:47.761 LINK lun_ut
00:02:48.052 CC test/unit/lib/thread/thread.c/thread_ut.o
00:02:48.052 LINK raid5f_ut
00:02:48.052 LINK nvme_ut
00:02:48.052 LINK bdev_nvme_ut
00:02:48.324 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:02:48.324 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:02:48.324 LINK iscsi_ut
00:02:48.324 CC test/unit/lib/util/base64.c/base64_ut.o
00:02:48.583 LINK scsi_bdev_ut
00:02:48.583 LINK base64_ut
00:02:48.842 CC test/unit/lib/iscsi/param.c/param_ut.o
00:02:48.842 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:02:48.842 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:02:48.842 LINK scsi_pr_ut
00:02:49.101 LINK blob_ut
00:02:49.101 LINK iobuf_ut
00:02:49.101 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:02:49.101 LINK cpuset_ut
00:02:49.360 LINK sock_ut
00:02:49.360 LINK crc16_ut
00:02:49.360 LINK param_ut
00:02:49.360 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:02:49.360 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:02:49.360 LINK bit_array_ut
00:02:49.360 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:02:49.619 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:02:49.619 LINK crc32_ieee_ut
00:02:49.619 CC test/unit/lib/sock/posix.c/posix_ut.o
00:02:49.619 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:02:49.619 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:02:49.877 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:02:49.877 LINK crc32c_ut
00:02:50.135 LINK portal_grp_ut
00:02:50.135 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:02:50.393 LINK crc64_ut
00:02:50.393 LINK tgt_node_ut
00:02:50.393 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:02:50.393 CC test/unit/lib/util/dif.c/dif_ut.o
00:02:50.652 LINK thread_ut
00:02:50.652 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:02:50.652 LINK posix_ut
00:02:50.652 LINK nvme_ns_ut
00:02:50.910 LINK nvme_ctrlr_ocssd_cmd_ut
00:02:50.910 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:02:50.910 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:02:50.910 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:02:51.169 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:02:51.169 LINK nvme_ctrlr_cmd_ut
00:02:51.169 LINK tcp_ut
00:02:51.427 LINK nvme_ctrlr_ut
00:02:51.427 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:02:51.686 CC test/unit/lib/util/iov.c/iov_ut.o
00:02:51.686 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:02:51.945 LINK dif_ut
00:02:51.945 LINK iov_ut
00:02:52.204 CC test/unit/lib/util/math.c/math_ut.o
00:02:52.204 LINK ctrlr_bdev_ut
00:02:52.204 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:02:52.462 LINK math_ut
00:02:52.462 LINK nvmf_ut
00:02:52.462 CC test/unit/lib/util/string.c/string_ut.o
00:02:52.462 CC test/unit/lib/util/xor.c/xor_ut.o
00:02:52.721 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:02:52.721 LINK pipe_ut
00:02:52.721 LINK string_ut
00:02:52.979 LINK xor_ut
00:02:52.979 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:02:52.979 LINK ctrlr_discovery_ut
00:02:52.979 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:02:53.238 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:02:53.238 LINK subsystem_ut
00:02:53.238 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:02:53.496 LINK pci_event_ut
00:02:53.755 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:02:53.755 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:02:53.755 LINK subsystem_ut
00:02:54.013 LINK nvme_ns_cmd_ut
00:02:54.013 LINK ctrlr_ut
00:02:54.013 LINK rpc_ut
00:02:54.271 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:02:54.271 LINK nvme_poll_group_ut
00:02:54.271 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:02:54.271 LINK idxd_user_ut
00:02:54.271 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:02:54.530 CC test/unit/lib/rdma/common.c/common_ut.o
00:02:54.530 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:02:54.530 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:02:54.530 LINK nvme_ns_ocssd_cmd_ut
00:02:54.787 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:02:55.046 LINK ftl_l2p_ut
00:02:55.046 LINK common_ut
00:02:55.046 LINK nvme_pcie_ut
00:02:55.046 LINK nvme_quirks_ut
00:02:55.305 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:02:55.305 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:02:55.305 LINK rdma_ut
00:02:55.305 LINK idxd_ut
00:02:55.305 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:02:55.305 LINK transport_ut
00:02:55.305 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:02:55.563 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:02:55.563 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:02:55.824 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:02:55.824 LINK ftl_io_ut
00:02:55.824 LINK nvme_qpair_ut
00:02:56.085 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:02:56.085 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:02:56.343 LINK nvme_io_msg_ut
00:02:56.343 LINK ftl_bitmap_ut
00:02:56.602 LINK nvme_transport_ut
00:02:56.602 LINK vhost_ut
00:02:56.602 LINK nvme_opal_ut
00:02:56.602 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:02:56.602 LINK ftl_band_ut
00:02:56.602 LINK nvme_fabric_ut
00:02:56.862 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:02:56.862 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:02:56.862 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:02:56.862 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:02:57.121 LINK ftl_mempool_ut
00:02:57.380 LINK nvme_pcie_common_ut
00:02:57.380 LINK ftl_mngt_ut
00:02:57.948 LINK nvme_tcp_ut
00:02:58.516 LINK ftl_layout_upgrade_ut
00:02:58.516 LINK nvme_cuse_ut
00:02:58.516 LINK ftl_sb_ut
00:02:58.774 LINK nvme_rdma_ut
00:02:58.774
00:02:58.774 real 1m58.216s
00:02:58.774 user 10m3.745s
00:02:58.774 sys 2m7.937s
00:02:58.774 04:43:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:58.774 04:43:22 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.774 ************************************
00:02:58.774 END TEST unittest_build
00:02:58.774 ************************************
00:02:59.033 04:43:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:02:59.033 04:43:22 -- common/autotest_common.sh@1690 -- # lcov --version
00:02:59.033 04:43:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:02:59.033 04:43:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:02:59.033 04:43:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:02:59.033 04:43:22 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:02:59.033 04:43:22 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:02:59.033 04:43:22 -- scripts/common.sh@335 -- # IFS=.-:
00:02:59.033 04:43:22 -- scripts/common.sh@335 -- # read -ra ver1
00:02:59.033 04:43:22 -- scripts/common.sh@336 -- # IFS=.-:
00:02:59.033 04:43:22 -- scripts/common.sh@336 -- # read -ra ver2
00:02:59.033 04:43:22 -- scripts/common.sh@337 -- # local 'op=<'
00:02:59.033 04:43:22 -- scripts/common.sh@339 -- # ver1_l=2
00:02:59.033 04:43:22 -- scripts/common.sh@340 -- # ver2_l=1
00:02:59.033 04:43:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:02:59.033 04:43:22 -- scripts/common.sh@343 -- # case "$op" in
00:02:59.033 04:43:22 -- scripts/common.sh@344 -- # : 1
00:02:59.033 04:43:22 -- scripts/common.sh@363 -- # (( v = 0 ))
00:02:59.033 04:43:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:59.033 04:43:22 -- scripts/common.sh@364 -- # decimal 1
00:02:59.033 04:43:22 -- scripts/common.sh@352 -- # local d=1
00:02:59.033 04:43:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:59.033 04:43:22 -- scripts/common.sh@354 -- # echo 1
00:02:59.033 04:43:22 -- scripts/common.sh@364 -- # ver1[v]=1
00:02:59.033 04:43:22 -- scripts/common.sh@365 -- # decimal 2
00:02:59.033 04:43:22 -- scripts/common.sh@352 -- # local d=2
00:02:59.033 04:43:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:59.033 04:43:22 -- scripts/common.sh@354 -- # echo 2
00:02:59.033 04:43:22 -- scripts/common.sh@365 -- # ver2[v]=2
00:02:59.033 04:43:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:02:59.033 04:43:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:02:59.033 04:43:22 -- scripts/common.sh@367 -- # return 0
00:02:59.033 04:43:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:59.033 04:43:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:02:59.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:59.033 --rc genhtml_branch_coverage=1
00:02:59.033 --rc genhtml_function_coverage=1
00:02:59.033 --rc genhtml_legend=1
00:02:59.033 --rc geninfo_all_blocks=1
00:02:59.033 --rc geninfo_unexecuted_blocks=1
00:02:59.033
00:02:59.033 '
00:02:59.033 04:43:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:02:59.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:59.033 --rc genhtml_branch_coverage=1
00:02:59.033 --rc genhtml_function_coverage=1
00:02:59.033 --rc genhtml_legend=1
00:02:59.033 --rc geninfo_all_blocks=1
00:02:59.033 --rc geninfo_unexecuted_blocks=1
00:02:59.033
00:02:59.033 '
00:02:59.033 04:43:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:02:59.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:59.033 --rc genhtml_branch_coverage=1
00:02:59.033 --rc genhtml_function_coverage=1
00:02:59.033 --rc genhtml_legend=1
00:02:59.033 --rc geninfo_all_blocks=1
00:02:59.033 --rc geninfo_unexecuted_blocks=1
00:02:59.033
00:02:59.033 '
00:02:59.033 04:43:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:02:59.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:59.033 --rc genhtml_branch_coverage=1
00:02:59.033 --rc genhtml_function_coverage=1
00:02:59.033 --rc genhtml_legend=1
00:02:59.033 --rc geninfo_all_blocks=1
00:02:59.033 --rc geninfo_unexecuted_blocks=1
00:02:59.033
00:02:59.033 '
00:02:59.033 04:43:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:02:59.033 04:43:22 -- nvmf/common.sh@7 -- # uname -s
00:02:59.033 04:43:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:59.033 04:43:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:59.033 04:43:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:59.033 04:43:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:59.033 04:43:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:59.033 04:43:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:59.033 04:43:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:59.033 04:43:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:59.033 04:43:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:59.033 04:43:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:59.033 04:43:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e74b746-ded7-4dde-a22d-3af59a1bbf22
04:43:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=7e74b746-ded7-4dde-a22d-3af59a1bbf22
00:02:59.033 04:43:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:59.033 04:43:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:59.033 04:43:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:02:59.033 04:43:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:59.033 04:43:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:59.033 04:43:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:59.033 04:43:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:59.033 04:43:22 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:59.033 04:43:22 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:59.034 04:43:22 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:59.034 04:43:22 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:59.034 04:43:22 -- paths/export.sh@6 -- # export PATH
00:02:59.034 04:43:22 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:59.034 04:43:22 -- nvmf/common.sh@46 -- # : 0
00:02:59.034 04:43:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:02:59.034 04:43:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:02:59.034 04:43:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:02:59.034 04:43:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:59.034 04:43:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:59.034 04:43:22 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:02:59.034 04:43:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:02:59.034 04:43:22 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:02:59.034 04:43:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:59.034 04:43:22 -- spdk/autotest.sh@32 -- # uname -s
00:02:59.034 04:43:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:59.034 04:43:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E'
00:02:59.034 04:43:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:02:59.034 04:43:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:02:59.034 04:43:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:02:59.034 04:43:22 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:59.034 04:43:22 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:59.034 04:43:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm
00:02:59.034 04:43:22 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property
00:02:59.034 04:43:22 -- spdk/autotest.sh@48 -- # udevadm_pid=51329
00:02:59.034 04:43:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power
00:02:59.034 04:43:22 -- spdk/autotest.sh@54 -- # echo 51358
00:02:59.034 04:43:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power
00:02:59.034 04:43:22 -- spdk/autotest.sh@56 -- # echo 51367
00:02:59.034 04:43:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power
00:02:59.034 04:43:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]]
00:02:59.034 04:43:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:59.034 04:43:22 -- spdk/autotest.sh@68 -- # timing_enter autotest
00:02:59.034 04:43:22 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:59.034 04:43:22 -- common/autotest_common.sh@10 -- # set +x
00:02:59.034 04:43:22 -- spdk/autotest.sh@70 -- # create_test_list
00:02:59.034 04:43:22 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:59.034 04:43:22 -- common/autotest_common.sh@10 -- # set +x
00:02:59.292 04:43:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:02:59.292 04:43:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:02:59.292 04:43:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk
00:02:59.292 04:43:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:02:59.292 04:43:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk
00:02:59.292 04:43:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:02:59.292 04:43:22 -- common/autotest_common.sh@1450 -- # uname
00:02:59.292 04:43:22 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']'
00:02:59.292 04:43:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:02:59.292 04:43:22 -- common/autotest_common.sh@1470 -- # uname
00:02:59.292 04:43:22 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]]
00:02:59.292 04:43:22 -- spdk/autotest.sh@79 -- # [[ y == y ]]
00:02:59.292 04:43:22 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:59.292 lcov: LCOV version 1.15
00:02:59.292 04:43:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:03:14.238 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:03:14.238 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:14.238 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:14.238 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:14.238 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:14.238 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:00.960 04:44:17 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:00.960 04:44:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.960 04:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:00.960 04:44:17 -- spdk/autotest.sh@89 -- # rm -f 00:04:00.960 04:44:17 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:00.960 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:00.960 04:44:17 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:00.960 04:44:17 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:00.960 04:44:17 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:00.960 04:44:17 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:00.960 04:44:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:00.960 04:44:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:00.960 04:44:17 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:00.960 04:44:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.960 04:44:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:00.960 04:44:17 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:00.960 04:44:17 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:04:00.960 04:44:17 -- spdk/autotest.sh@108 -- # grep -v p 00:04:00.960 04:44:17 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:00.960 04:44:17 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:00.960 04:44:17 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:00.960 04:44:17 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:00.960 04:44:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.960 No valid GPT data, bailing 00:04:00.960 04:44:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.960 04:44:18 -- scripts/common.sh@393 -- # pt= 00:04:00.960 04:44:18 -- scripts/common.sh@394 -- # return 1 00:04:00.960 04:44:18 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.960 1+0 records in 00:04:00.960 1+0 records out 00:04:00.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520768 s, 201 MB/s 00:04:00.960 04:44:18 -- spdk/autotest.sh@116 -- # sync 00:04:00.960 04:44:18 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.960 04:44:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.960 04:44:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.960 04:44:19 -- spdk/autotest.sh@122 -- # uname -s 00:04:00.960 04:44:19 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:00.960 04:44:19 -- spdk/autotest.sh@123 -- # run_test setup.sh 
/home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:00.960 04:44:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.960 04:44:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.960 04:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:00.960 ************************************ 00:04:00.960 START TEST setup.sh 00:04:00.960 ************************************ 00:04:00.960 04:44:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:00.960 * Looking for test storage... 00:04:00.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:00.960 04:44:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:00.960 04:44:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:00.960 04:44:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:00.960 04:44:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:00.960 04:44:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:00.960 04:44:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:00.960 04:44:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:00.961 04:44:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:00.961 04:44:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:00.961 04:44:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.961 04:44:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:00.961 04:44:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:00.961 04:44:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:00.961 04:44:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:00.961 04:44:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:00.961 04:44:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:00.961 04:44:19 -- scripts/common.sh@344 -- # : 1 00:04:00.961 04:44:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:00.961 04:44:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.961 04:44:19 -- scripts/common.sh@364 -- # decimal 1 00:04:00.961 04:44:19 -- scripts/common.sh@352 -- # local d=1 00:04:00.961 04:44:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.961 04:44:19 -- scripts/common.sh@354 -- # echo 1 00:04:00.961 04:44:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:00.961 04:44:19 -- scripts/common.sh@365 -- # decimal 2 00:04:00.961 04:44:19 -- scripts/common.sh@352 -- # local d=2 00:04:00.961 04:44:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.961 04:44:19 -- scripts/common.sh@354 -- # echo 2 00:04:00.961 04:44:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:00.961 04:44:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:00.961 04:44:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:00.961 04:44:19 -- scripts/common.sh@367 -- # return 0 00:04:00.961 04:44:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- setup/test-setup.sh@10 -- # uname -s 00:04:00.961 04:44:19 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:00.961 04:44:19 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:00.961 04:44:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.961 04:44:19 -- common/autotest_common.sh@10 -- # set +x 00:04:00.961 ************************************ 00:04:00.961 START TEST acl 00:04:00.961 ************************************ 00:04:00.961 04:44:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:00.961 * Looking for test storage... 
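The lcov probe traced above reduces to scripts/common.sh's field-wise comparison of dotted versions: split both strings on '.', '-' and ':', then compare numerically field by field until one side wins. A minimal standalone sketch of that check, assuming plain bash 4+; version_lt is an illustrative name, not the exact SPDK helper:

# Sketch of the cmp_versions-style "lt" check traced above.
# Splits on . - : and compares each numeric field in turn.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

With version_lt 1.15 2 the first fields already decide it (1 < 2), which is why the trace above returns 0 after a single loop iteration and enables the legacy --rc lcov_* options.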
00:04:00.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:00.961 04:44:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:00.961 04:44:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:00.961 04:44:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:00.961 04:44:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:00.961 04:44:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:00.961 04:44:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:00.961 04:44:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:00.961 04:44:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:00.961 04:44:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.961 04:44:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:00.961 04:44:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:00.961 04:44:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:00.961 04:44:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:00.961 04:44:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:00.961 04:44:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:00.961 04:44:19 -- scripts/common.sh@344 -- # : 1 00:04:00.961 04:44:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:00.961 04:44:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:00.961 04:44:19 -- scripts/common.sh@364 -- # decimal 1 00:04:00.961 04:44:19 -- scripts/common.sh@352 -- # local d=1 00:04:00.961 04:44:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.961 04:44:19 -- scripts/common.sh@354 -- # echo 1 00:04:00.961 04:44:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:00.961 04:44:19 -- scripts/common.sh@365 -- # decimal 2 00:04:00.961 04:44:19 -- scripts/common.sh@352 -- # local d=2 00:04:00.961 04:44:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.961 04:44:19 -- scripts/common.sh@354 -- # echo 2 00:04:00.961 04:44:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:00.961 04:44:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:00.961 04:44:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:00.961 04:44:19 -- scripts/common.sh@367 -- # return 0 00:04:00.961 04:44:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:00.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.961 --rc genhtml_branch_coverage=1 00:04:00.961 --rc genhtml_function_coverage=1 00:04:00.961 --rc genhtml_legend=1 00:04:00.961 --rc geninfo_all_blocks=1 00:04:00.961 --rc geninfo_unexecuted_blocks=1 00:04:00.961 00:04:00.961 ' 00:04:00.961 04:44:19 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:00.961 04:44:19 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:00.961 04:44:19 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:00.961 04:44:19 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:00.961 04:44:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:00.961 04:44:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:00.961 04:44:19 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:00.961 04:44:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.961 04:44:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:00.961 04:44:19 -- setup/acl.sh@12 -- # devs=() 00:04:00.961 04:44:19 -- setup/acl.sh@12 -- # declare -a devs 00:04:00.961 04:44:19 -- setup/acl.sh@13 -- # drivers=() 00:04:00.961 04:44:19 -- setup/acl.sh@13 -- # declare -A drivers 00:04:00.961 04:44:19 -- setup/acl.sh@51 -- # setup reset 00:04:00.961 04:44:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.961 04:44:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.961 04:44:20 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:00.961 04:44:20 -- setup/acl.sh@16 -- # local dev driver 00:04:00.961 04:44:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.961 04:44:20 -- setup/acl.sh@15 -- # setup output status 00:04:00.961 04:44:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.961 04:44:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.961 Hugepages 00:04:00.961 node hugesize free / total 00:04:00.961 04:44:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:00.961 04:44:20 -- setup/acl.sh@19 -- # continue 00:04:00.961 04:44:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.961 00:04:00.961 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.961 04:44:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:00.961 04:44:20 -- setup/acl.sh@19 -- # continue 00:04:00.961 04:44:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.961 04:44:20 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:00.961 04:44:20 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:00.961 04:44:20 -- setup/acl.sh@20 -- # continue 00:04:00.961 04:44:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.961 04:44:20 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:00.962 04:44:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:00.962 04:44:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:00.962 04:44:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:00.962 04:44:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:00.962 04:44:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.962 04:44:20 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:00.962 04:44:20 -- setup/acl.sh@54 -- # run_test denied denied 00:04:00.962 04:44:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.962 04:44:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.962 04:44:20 -- 
common/autotest_common.sh@10 -- # set +x 00:04:00.962 ************************************ 00:04:00.962 START TEST denied 00:04:00.962 ************************************ 00:04:00.962 04:44:20 -- common/autotest_common.sh@1114 -- # denied 00:04:00.962 04:44:20 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:00.962 04:44:20 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:00.962 04:44:20 -- setup/acl.sh@38 -- # setup output config 00:04:00.962 04:44:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.962 04:44:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.962 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:00.962 04:44:21 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:00.962 04:44:21 -- setup/acl.sh@28 -- # local dev driver 00:04:00.962 04:44:21 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.962 04:44:21 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:00.962 04:44:21 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:00.962 04:44:21 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.962 04:44:21 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.962 04:44:21 -- setup/acl.sh@41 -- # setup reset 00:04:00.962 04:44:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.962 04:44:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.962 00:04:00.962 real 0m1.433s 00:04:00.962 user 0m0.403s 00:04:00.962 sys 0m1.097s 00:04:00.962 04:44:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.962 04:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.962 ************************************ 00:04:00.962 END TEST denied 00:04:00.962 ************************************ 00:04:00.962 04:44:22 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.962 04:44:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.962 04:44:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.962 04:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.962 ************************************ 00:04:00.962 START TEST allowed 00:04:00.962 ************************************ 00:04:00.962 04:44:22 -- common/autotest_common.sh@1114 -- # allowed 00:04:00.962 04:44:22 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:00.962 04:44:22 -- setup/acl.sh@45 -- # setup output config 00:04:00.962 04:44:22 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:00.962 04:44:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.962 04:44:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.962 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.962 04:44:23 -- setup/acl.sh@47 -- # verify 00:04:00.962 04:44:23 -- setup/acl.sh@28 -- # local dev driver 00:04:00.962 04:44:23 -- setup/acl.sh@48 -- # setup reset 00:04:00.962 04:44:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.962 04:44:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.962 00:04:00.962 real 0m1.588s 00:04:00.962 user 0m0.365s 00:04:00.962 sys 0m1.263s 00:04:00.962 04:44:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.962 ************************************ 00:04:00.962 END TEST allowed 00:04:00.962 04:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:00.962 ************************************ 00:04:00.962 
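Both ACL tests above hinge on the same sysfs check: after scripts/setup.sh runs with PCI_BLOCKED or PCI_ALLOWED, the test resolves the controller's driver symlink and compares it against the expected driver (nvme while denied, uio_pci_generic once allowed). A hedged sketch of that verification step; driver_of is an illustrative name, and 0000:00:06.0 is the controller address from the trace:

# Resolve which kernel driver a PCI device is currently bound to,
# using the same readlink the acl tests trace above.
driver_of() {
    local bdf=$1
    local link=/sys/bus/pci/devices/$bdf/driver
    [[ -e $link ]] || { echo "unbound"; return 1; }
    basename "$(readlink -f "$link")"
}
driver_of 0000:00:06.0   # prints e.g. "nvme" or "uio_pci_generic"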
************************************ 00:04:00.962 END TEST acl 00:04:00.962 ************************************ 00:04:00.962 00:04:00.962 real 0m4.076s 00:04:00.962 user 0m1.206s 00:04:00.962 sys 0m3.043s 00:04:00.962 04:44:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.962 04:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:00.962 04:44:23 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:00.962 04:44:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.962 04:44:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.962 04:44:23 -- common/autotest_common.sh@10 -- # set +x 00:04:00.962 ************************************ 00:04:00.962 START TEST hugepages 00:04:00.962 ************************************ 00:04:00.962 04:44:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:00.962 * Looking for test storage... 00:04:00.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:00.962 04:44:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:00.962 04:44:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:00.962 04:44:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:00.962 04:44:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:00.962 04:44:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:00.962 04:44:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:00.962 04:44:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:00.962 04:44:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:00.962 04:44:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:00.962 04:44:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.962 04:44:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:00.962 04:44:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:00.962 04:44:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:00.962 04:44:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:00.962 04:44:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:00.962 04:44:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:00.962 04:44:24 -- scripts/common.sh@344 -- # : 1 00:04:00.962 04:44:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:00.962 04:44:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.962 04:44:24 -- scripts/common.sh@364 -- # decimal 1 00:04:00.962 04:44:24 -- scripts/common.sh@352 -- # local d=1 00:04:00.962 04:44:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.962 04:44:24 -- scripts/common.sh@354 -- # echo 1 00:04:00.962 04:44:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:00.962 04:44:24 -- scripts/common.sh@365 -- # decimal 2 00:04:00.962 04:44:24 -- scripts/common.sh@352 -- # local d=2 00:04:00.962 04:44:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.962 04:44:24 -- scripts/common.sh@354 -- # echo 2 00:04:00.962 04:44:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:00.962 04:44:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:00.962 04:44:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:00.962 04:44:24 -- scripts/common.sh@367 -- # return 0 00:04:00.962 04:44:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.962 04:44:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:00.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.962 --rc genhtml_branch_coverage=1 00:04:00.962 --rc genhtml_function_coverage=1 00:04:00.962 --rc genhtml_legend=1 00:04:00.962 --rc geninfo_all_blocks=1 00:04:00.962 --rc geninfo_unexecuted_blocks=1 00:04:00.962 00:04:00.962 ' 00:04:00.962 04:44:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:00.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.962 --rc genhtml_branch_coverage=1 00:04:00.962 --rc genhtml_function_coverage=1 00:04:00.962 --rc genhtml_legend=1 00:04:00.962 --rc geninfo_all_blocks=1 00:04:00.962 --rc geninfo_unexecuted_blocks=1 00:04:00.962 00:04:00.962 ' 00:04:00.962 04:44:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:00.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.962 --rc genhtml_branch_coverage=1 00:04:00.962 --rc genhtml_function_coverage=1 00:04:00.962 --rc genhtml_legend=1 00:04:00.962 --rc geninfo_all_blocks=1 00:04:00.962 --rc geninfo_unexecuted_blocks=1 00:04:00.962 00:04:00.962 ' 00:04:00.962 04:44:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:00.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.962 --rc genhtml_branch_coverage=1 00:04:00.962 --rc genhtml_function_coverage=1 00:04:00.962 --rc genhtml_legend=1 00:04:00.962 --rc geninfo_all_blocks=1 00:04:00.962 --rc geninfo_unexecuted_blocks=1 00:04:00.962 00:04:00.962 ' 00:04:00.962 04:44:24 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:00.962 04:44:24 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:00.962 04:44:24 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:00.962 04:44:24 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:00.962 04:44:24 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:00.962 04:44:24 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:00.962 04:44:24 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:00.962 04:44:24 -- setup/common.sh@18 -- # local node= 00:04:00.962 04:44:24 -- setup/common.sh@19 -- # local var val 00:04:00.962 04:44:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.962 04:44:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.962 04:44:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.962 04:44:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.962 04:44:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.962 
04:44:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.962 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 2947872 kB' 'MemAvailable: 7333032 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414672 kB' 'Inactive: 4237704 kB' 'Active(anon): 126548 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237704 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 144196 kB' 'Mapped: 58408 kB' 'Shmem: 2604 kB' 'KReclaimable: 181084 kB' 'Slab: 261932 kB' 'SReclaimable: 181084 kB' 'SUnreclaim: 80848 kB' 'KernelStack: 5104 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026008 kB' 'Committed_AS: 387128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20168 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 
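The field-by-field scan traced here (and continuing below) is setup/common.sh's get_meminfo: it dumps /proc/meminfo into an array, reads it back with IFS=': ', and skips every field with continue until it reaches Hugepagesize, where it echoes 2048. A compact pure-bash equivalent of that single-field lookup, assuming no per-node filtering is needed; get_hugepagesize is an illustrative name:

# Read one field (in kB) from /proc/meminfo with the same
# IFS=': ' read -r var val _ pattern the trace shows.
get_hugepagesize() {
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == Hugepagesize ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_hugepagesize   # -> 2048 on this runner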
00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 
04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.963 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.963 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # continue 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.964 04:44:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.964 04:44:24 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:00.964 04:44:24 -- setup/common.sh@33 -- # echo 2048 00:04:00.964 04:44:24 -- setup/common.sh@33 -- # return 0 00:04:00.964 04:44:24 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:00.964 04:44:24 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:00.964 04:44:24 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:00.964 04:44:24 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:00.964 04:44:24 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:00.964 04:44:24 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:00.964 04:44:24 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:00.964 04:44:24 -- setup/hugepages.sh@207 -- # get_nodes 00:04:00.964 04:44:24 -- setup/hugepages.sh@27 -- # local node 00:04:00.964 04:44:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.964 04:44:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:00.964 04:44:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.964 04:44:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.964 04:44:24 -- setup/hugepages.sh@208 -- # clear_hp 00:04:00.964 04:44:24 -- setup/hugepages.sh@37 -- # local node hp 00:04:00.964 04:44:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.964 04:44:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.964 04:44:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:00.964 04:44:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.964 04:44:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:00.964 04:44:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:00.964 04:44:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:00.964 04:44:24 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:00.964 04:44:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.964 04:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.964 04:44:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.964 ************************************ 00:04:00.964 START TEST default_setup 00:04:00.964 ************************************ 00:04:00.964 04:44:24 -- 
common/autotest_common.sh@1114 -- # default_setup
00:04:00.964 04:44:24 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:00.964 04:44:24 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:00.964 04:44:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:00.964 04:44:24 -- setup/hugepages.sh@51 -- # shift
00:04:00.964 04:44:24 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:00.964 04:44:24 -- setup/hugepages.sh@52 -- # local node_ids
00:04:00.964 04:44:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.964 04:44:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:00.964 04:44:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:00.964 04:44:24 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:00.964 04:44:24 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:00.964 04:44:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:00.964 04:44:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:00.964 04:44:24 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:00.964 04:44:24 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:00.964 04:44:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:00.964 04:44:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:00.964 04:44:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:00.964 04:44:24 -- setup/hugepages.sh@73 -- # return 0
00:04:00.964 04:44:24 -- setup/hugepages.sh@137 -- # setup output
00:04:00.964 04:44:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.964 04:44:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:01.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:01.223 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:01.795 04:44:25 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:01.795 04:44:25 -- setup/hugepages.sh@89 -- # local node
00:04:01.795 04:44:25 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.795 04:44:25 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.795 04:44:25 -- setup/hugepages.sh@92 -- # local surp
00:04:01.795 04:44:25 -- setup/hugepages.sh@93 -- # local resv
00:04:01.795 04:44:25 -- setup/hugepages.sh@94 -- # local anon
00:04:01.795 04:44:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.795 04:44:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.795 04:44:25 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.795 04:44:25 -- setup/common.sh@18 -- # local node=
00:04:01.795 04:44:25 -- setup/common.sh@19 -- # local var val
00:04:01.795 04:44:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.795 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.795 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.795 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.795 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.795 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.795 04:44:25 -- setup/common.sh@31 -- # IFS=': '
00:04:01.795 04:44:25 -- setup/common.sh@31 -- # read -r var val _
00:04:01.795 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5041048 kB' 'MemAvailable: 9426208 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 416088 kB' 'Inactive: 4237704 kB' 'Active(anon): 127964 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237704 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145516 kB' 'Mapped: 58388 kB' 'Shmem: 2596 kB' 'KReclaimable: 181084 kB' 'Slab: 262100 kB' 'SReclaimable: 181084 kB' 'SUnreclaim: 81016 kB' 'KernelStack: 5072 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:01.795 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue (every other non-matching key through HardwareCorrupted is compared and skipped the same way)
00:04:01.795 04:44:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.795 04:44:25 -- setup/common.sh@33 -- # echo 0
00:04:01.795 04:44:25 -- setup/common.sh@33 -- # return 0
00:04:01.795 04:44:25 -- setup/hugepages.sh@97 -- # anon=0
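The lookup just traced is the core primitive of this stage: get_meminfo scans a meminfo file line by line and prints the value of one key (here AnonHugePages, which reads 0). A minimal standalone sketch of that behavior, assuming a hypothetical name get_meminfo_value rather than the exact setup/common.sh implementation (which, as the trace shows, slurps the whole file with mapfile instead):

get_meminfo_value() {
    local key=$1 node=${2:-} mem_f=/proc/meminfo line
    # Per-node lookups read that node's own meminfo, mirroring common.sh@23-24 above
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # node files prefix each key with "Node <id> "
        if [[ $line == "$key:"* ]]; then
            line=${line#"$key:"}    # keep the padded value, e.g. "       0 kB"
            set -- $line            # word-split to isolate the number
            echo "$1"
            return 0
        fi
    done < "$mem_f"
    echo 0                          # treat an absent key as 0
}

get_meminfo_value AnonHugePages     # on the VM in this log, prints 0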
00:04:01.795 04:44:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.795 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.795 04:44:25 -- setup/common.sh@18 -- # local node=
00:04:01.795 04:44:25 -- setup/common.sh@19 -- # local var val
00:04:01.795 04:44:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.795 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.795 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.795 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.795 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.795 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.795 04:44:25 -- setup/common.sh@31 -- # IFS=': '
00:04:01.795 04:44:25 -- setup/common.sh@31 -- # read -r var val _
00:04:01.795 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5041048 kB' 'MemAvailable: 9426212 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415668 kB' 'Inactive: 4237708 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145376 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181084 kB' 'Slab: 262100 kB' 'SReclaimable: 181084 kB' 'SUnreclaim: 81016 kB' 'KernelStack: 5056 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:01.796 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue (every other non-matching key through HugePages_Rsvd is compared and skipped the same way)
00:04:01.797 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.797 04:44:25 -- setup/common.sh@33 -- # echo 0
00:04:01.797 04:44:25 -- setup/common.sh@33 -- # return 0
00:04:01.797 04:44:25 -- setup/hugepages.sh@99 -- # surp=0
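With anon and surp collected (and resv next), these numbers feed one consistency check. A sketch of the accounting verify_nr_hugepages is building toward, using the hypothetical get_meminfo_value helper from the earlier sketch; the variable names here are assumptions, and only the relation mirrors the (( ... )) tests visible in the trace:

nr_expected=1024                           # from get_test_nr_hugepages 2097152 0
surp=$(get_meminfo_value HugePages_Surp)   # surplus pages allocated beyond the configured pool
resv=$(get_meminfo_value HugePages_Rsvd)   # pages reserved by mappings but not yet faulted in
total=$(get_meminfo_value HugePages_Total)
(( total == nr_expected + surp + resv )) || echo "hugepage pool mismatch: $total"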
00:04:01.797 04:44:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.797 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.797 04:44:25 -- setup/common.sh@18 -- # local node=
00:04:01.797 04:44:25 -- setup/common.sh@19 -- # local var val
00:04:01.797 04:44:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.797 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.797 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.797 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.797 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.797 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.797 04:44:25 -- setup/common.sh@31 -- # IFS=': '
00:04:01.797 04:44:25 -- setup/common.sh@31 -- # read -r var val _
00:04:01.797 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5041048 kB' 'MemAvailable: 9426212 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415668 kB' 'Inactive: 4237708 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145360 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181084 kB' 'Slab: 262100 kB' 'SReclaimable: 181084 kB' 'SUnreclaim: 81016 kB' 'KernelStack: 5056 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:01.797 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- # continue (every other non-matching key through HugePages_Free is compared and skipped the same way)
00:04:01.799 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.799 04:44:25 -- setup/common.sh@33 -- # echo 0
00:04:01.799 04:44:25 -- setup/common.sh@33 -- # return 0
00:04:01.799 04:44:25 -- setup/hugepages.sh@100 -- # resv=0
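Where the repeated 1024 comes from: get_test_nr_hugepages was called with a 2097152 kB target, and the snapshots above report Hugepagesize: 2048 kB. A one-line arithmetic check, reusing the hypothetical helper from the first sketch:

size_kb=2097152                                     # pool size requested by default_setup
hugepagesize_kb=$(get_meminfo_value Hugepagesize)   # 2048 on this VM
echo $(( size_kb / hugepagesize_kb ))               # -> 1024 pages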
00:04:01.799 nr_hugepages=1024
00:04:01.799 04:44:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:01.799 resv_hugepages=0
00:04:01.799 04:44:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.799 surplus_hugepages=0
00:04:01.799 04:44:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.799 anon_hugepages=0
00:04:01.799 04:44:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.799 04:44:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.799 04:44:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:01.799 04:44:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.799 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.799 04:44:25 -- setup/common.sh@18 -- # local node=
00:04:01.799 04:44:25 -- setup/common.sh@19 -- # local var val
00:04:01.799 04:44:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.799 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.799 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.799 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.799 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.799 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.799 04:44:25 -- setup/common.sh@31 -- # IFS=': '
00:04:01.799 04:44:25 -- setup/common.sh@31 -- # read -r var val _
00:04:01.799 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5040808 kB' 'MemAvailable: 9425988 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415676 kB' 'Inactive: 4237708 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145364 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262116 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81016 kB' 'KernelStack: 5056 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:01.799 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -- # continue (every other non-matching key through Unaccepted is compared and skipped the same way)
00:04:01.800 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.800 04:44:25 -- setup/common.sh@33 -- # echo 1024
00:04:01.800 04:44:25 -- setup/common.sh@33 -- # return 0
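The global pool this trace verifies is provisioned through standard kernel interfaces; SPDK's setup.sh wraps them, and the exact commands it issues are not shown in this log, so the following is only an illustration of the plain sysctl/sysfs path:

# needs root; requests the same 1024 x 2 MiB pool seen in the snapshots
echo 1024 > /proc/sys/vm/nr_hugepages
# per-node view of the same pool on a single-node machine
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo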
00:04:01.800 04:44:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.800 04:44:25 -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.800 04:44:25 -- setup/hugepages.sh@27 -- # local node
00:04:01.800 04:44:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.800 04:44:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.800 04:44:25 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:01.800 04:44:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.800 04:44:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.800 04:44:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.800 04:44:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.800 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.800 04:44:25 -- setup/common.sh@18 -- # local node=0
00:04:01.800 04:44:25 -- setup/common.sh@19 -- # local var val
00:04:01.800 04:44:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.800 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.800 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.800 04:44:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.800 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.800 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.800 04:44:25 -- setup/common.sh@31 -- # IFS=': '
00:04:01.800 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5040808 kB' 'MemUsed: 7205516 kB' 'SwapCached: 0 kB' 'Active: 415628 kB' 'Inactive: 4237708 kB' 'Active(anon): 127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 4537176 kB' 'Mapped: 58372 kB' 'AnonPages: 145324 kB' 'Shmem: 2596 kB' 'KernelStack: 5040 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181100 kB' 'Slab: 262116 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:01.800 04:44:25 -- setup/common.sh@31 -- # read -r var val _
00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue (the remaining node0 keys through AnonHugePages are compared and skipped the same way)
setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # continue 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.801 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.801 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.801 04:44:25 -- setup/common.sh@33 -- # echo 0 00:04:01.801 04:44:25 -- setup/common.sh@33 -- # return 0 00:04:01.801 04:44:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.801 04:44:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.801 04:44:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.801 04:44:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.801 node0=1024 expecting 1024 00:04:01.801 04:44:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.801 04:44:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.801 00:04:01.801 real 0m0.983s 00:04:01.801 user 0m0.294s 00:04:01.801 sys 0m0.672s 00:04:01.801 04:44:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.801 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:01.801 ************************************ 00:04:01.801 END TEST default_setup 00:04:01.801 ************************************ 00:04:01.801 04:44:25 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:01.801 04:44:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.801 04:44:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.801 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:01.801 ************************************ 00:04:01.801 START TEST 
per_node_1G_alloc 00:04:01.801 ************************************ 00:04:01.801 04:44:25 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:01.801 04:44:25 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:01.801 04:44:25 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:01.801 04:44:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:01.801 04:44:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.801 04:44:25 -- setup/hugepages.sh@51 -- # shift 00:04:01.801 04:44:25 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.801 04:44:25 -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.801 04:44:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.801 04:44:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:01.801 04:44:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.801 04:44:25 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.801 04:44:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.801 04:44:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:01.801 04:44:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.801 04:44:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.801 04:44:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.801 04:44:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.802 04:44:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.802 04:44:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:01.802 04:44:25 -- setup/hugepages.sh@73 -- # return 0 00:04:01.802 04:44:25 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:01.802 04:44:25 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:01.802 04:44:25 -- setup/hugepages.sh@146 -- # setup output 00:04:01.802 04:44:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.802 04:44:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:02.061 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.322 04:44:25 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:02.322 04:44:25 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:02.322 04:44:25 -- setup/hugepages.sh@89 -- # local node 00:04:02.322 04:44:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.322 04:44:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.322 04:44:25 -- setup/hugepages.sh@92 -- # local surp 00:04:02.322 04:44:25 -- setup/hugepages.sh@93 -- # local resv 00:04:02.322 04:44:25 -- setup/hugepages.sh@94 -- # local anon 00:04:02.322 04:44:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.322 04:44:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.322 04:44:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.322 04:44:25 -- setup/common.sh@18 -- # local node= 00:04:02.322 04:44:25 -- setup/common.sh@19 -- # local var val 00:04:02.322 04:44:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.322 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.322 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.322 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.322 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.322 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.322 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 
04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6093936 kB' 'MemAvailable: 10479116 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415764 kB' 'Inactive: 4237708 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145540 kB' 'Mapped: 58404 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262156 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81056 kB' 'KernelStack: 5072 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 
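[editor's note] The long runs of "[[ Key == \H\u\g\e... ]]" / "continue" above are one helper at work: get_meminfo dumps the chosen meminfo file via printf, then re-reads it with IFS=': ' and "read -r var val _", skipping every key until the requested one matches, echoing its value, and returning. A minimal stand-alone sketch of that scan follows; it is an illustration, not the exact setup/common.sh helper (which also handles per-node files, as the trace shows):

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo scan traced above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # The trace escapes the key character by character
        # (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) because an unquoted
        # right-hand side of == inside [[ ]] is a glob pattern;
        # quoting "$get" forces the same literal comparison here.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Total   # e.g. prints 512 in the dump above
```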
00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 
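[editor's note] One step earlier in the same helper, the trace shows how the input file is picked: with node= empty, the probe "[[ -e /sys/devices/system/node/node/meminfo ]]" fails and mem_f stays /proc/meminfo; with a node id it switches to the per-node file, whose "Node <id> " line prefixes are stripped with an extglob pattern before the scan. A sketch of just that selection, again an illustration rather than the exact helper:

```bash
#!/usr/bin/env bash
# Sketch of the meminfo source selection seen in the trace.
shopt -s extglob              # the +([0-9]) pattern below needs extglob

node=${1-}                    # empty means "whole system"
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem < "$mem_f"
# Per-node meminfo lines carry a "Node <id> " prefix; stripping it
# lets the same "Key: value" scan work for both sources.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```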
00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # 
continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.323 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.323 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.324 04:44:25 -- setup/common.sh@33 -- # echo 0 00:04:02.324 04:44:25 -- setup/common.sh@33 -- # return 0 00:04:02.324 04:44:25 -- setup/hugepages.sh@97 -- # anon=0 00:04:02.324 04:44:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.324 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.324 04:44:25 -- setup/common.sh@18 -- # local node= 00:04:02.324 04:44:25 -- setup/common.sh@19 -- # local var val 00:04:02.324 04:44:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.324 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.324 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.324 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.324 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.324 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6094188 kB' 'MemAvailable: 10479368 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415552 kB' 'Inactive: 4237708 kB' 'Active(anon): 127428 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145032 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262152 kB' 'SReclaimable: 181100 kB' 
'SUnreclaim: 81052 kB' 'KernelStack: 5024 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.324 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.324 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 
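[editor's note] Each of these full-file scans feeds a single number into the verification arithmetic: anon_hugepages=0 from AnonHugePages, surplus_hugepages=0 from HugePages_Surp, and, a little further on, resv_hugepages=0 from HugePages_Rsvd, after which the trace checks "(( 512 == nr_hugepages + surp + resv ))". A worked sketch of that check, with this run's values inlined in place of the get_meminfo calls:

```bash
#!/usr/bin/env bash
# Worked sketch of the verify_nr_hugepages arithmetic traced above.
# The literals stand in for get_meminfo calls; they are the values
# this run reports, not constants of the script.
nr_hugepages=512   # requested via NRHUGE=512 on node 0
anon=0             # AnonHugePages: 0 kB in the dump
surp=0             # HugePages_Surp: 0
resv=0             # HugePages_Rsvd: 0
total=512          # HugePages_Total: 512

# Surplus and reserved pages count against the pool, so the kernel's
# total must equal the request plus both adjustments:
(( total == nr_hugepages + surp + resv )) && echo "pool size matches"
(( total == nr_hugepages )) && echo "nr_hugepages verified"
```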
00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.325 04:44:25 -- setup/common.sh@33 -- # echo 0 00:04:02.325 04:44:25 -- setup/common.sh@33 -- # return 0 00:04:02.325 04:44:25 -- setup/hugepages.sh@99 -- # surp=0 00:04:02.325 04:44:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.325 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.325 04:44:25 -- setup/common.sh@18 -- # local node= 00:04:02.325 04:44:25 -- setup/common.sh@19 -- # local var val 00:04:02.325 04:44:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.325 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.325 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.325 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.325 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.325 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6094504 kB' 'MemAvailable: 10479684 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415524 kB' 'Inactive: 4237708 kB' 'Active(anon): 127400 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145264 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262152 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81052 kB' 'KernelStack: 5008 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.325 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.325 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 
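[editor's note] The 512 being verified here falls out of the size passed to get_test_nr_hugepages at the top of this test: 1048576 kB (1 GiB) on node 0, at the 2048 kB Hugepagesize shown in every dump, is exactly 512 pages, which matches 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB'. A sketch of that conversion (the helper name below is hypothetical, not the one in setup/hugepages.sh):

```bash
#!/usr/bin/env bash
# Sketch of the page-count math behind per_node_1G_alloc.
pages_for_size() {
    local size_kb=$1 hugepage_kb=$2
    echo $(( size_kb / hugepage_kb ))
}

pages=$(pages_for_size 1048576 2048)   # 1 GiB at the 2048 kB page size
echo "NRHUGE=$pages HUGENODE=0"        # -> NRHUGE=512 HUGENODE=0
```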
00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.326 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.326 04:44:25 -- setup/common.sh@33 -- # echo 0 00:04:02.326 04:44:25 -- setup/common.sh@33 -- # return 0 00:04:02.326 04:44:25 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.326 nr_hugepages=512 00:04:02.326 04:44:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:02.326 resv_hugepages=0 00:04:02.326 04:44:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.326 surplus_hugepages=0 00:04:02.326 04:44:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.326 anon_hugepages=0 00:04:02.326 04:44:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.326 04:44:25 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.326 04:44:25 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:02.326 04:44:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.326 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.326 04:44:25 -- setup/common.sh@18 -- # local node= 00:04:02.326 04:44:25 -- setup/common.sh@19 -- # local var val 00:04:02.326 04:44:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.326 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.326 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.326 04:44:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.326 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.326 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:04:02.326 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.327 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6094788 kB' 'MemAvailable: 10479968 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415600 kB' 'Inactive: 4237708 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145388 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262148 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81048 kB' 'KernelStack: 5056 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:02.327 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.327 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.327 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.327 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.327 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 04:44:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.586 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.586 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.586 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.586 04:44:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 
04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.587 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.587 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.588 04:44:25 -- setup/common.sh@33 -- # echo 512 00:04:02.588 04:44:25 -- setup/common.sh@33 -- # return 0 00:04:02.588 04:44:25 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.588 04:44:25 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.588 04:44:25 -- setup/hugepages.sh@27 -- # local node 00:04:02.588 04:44:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.588 04:44:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.588 04:44:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.588 04:44:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.588 04:44:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.588 04:44:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.588 04:44:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.588 04:44:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.588 04:44:25 -- setup/common.sh@18 -- # local node=0 00:04:02.588 04:44:25 -- setup/common.sh@19 -- # local var val 00:04:02.588 04:44:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.588 04:44:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.588 04:44:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.588 04:44:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.588 04:44:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.588 04:44:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6094788 kB' 'MemUsed: 6151536 kB' 'SwapCached: 0 kB' 'Active: 415884 kB' 'Inactive: 4237708 kB' 'Active(anon): 127760 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 4537176 kB' 'Mapped: 58372 kB' 'AnonPages: 145380 kB' 'Shmem: 2596 kB' 'KernelStack: 5024 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181100 kB' 'Slab: 262148 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # continue 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.588 04:44:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.588 04:44:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
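Editor's aside: get_nodes (setup/hugepages.sh@27-33 above) discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]), and the per-node meminfo just printed is the "Node <n> "-prefixed variant that get_meminfo strips. A small sketch of that discovery step, under the same assumptions (single-node layout, as on this runner):

    #!/usr/bin/env bash
    shopt -s extglob nullglob        # extglob for node+([0-9]); nullglob if none match
    nodes_sys=()
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}      # ".../node0" -> "0"
        # per-node lines read e.g. "Node 0 HugePages_Total:   512"
        while read -r _ _ key val; do
            [[ $key == HugePages_Total: ]] && nodes_sys[node]=$val
        done < "$node_dir/meminfo"
        echo "node$node reports ${nodes_sys[node]} hugepages"
    done

On this VM the loop runs once and records nodes_sys[0]=512, matching the nodes_sys[${node##*node}]=512 assignment in the trace.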
[... scan of the node0 fields elided: MemTotal through HugePages_Free (including the MemUsed and FilePages entries unique to the per-node file) were each compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue ...]
00:04:02.589 04:44:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.589 04:44:25 -- setup/common.sh@33 -- # echo 0
00:04:02.589 04:44:25 -- setup/common.sh@33 -- # return 0
00:04:02.589 04:44:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.589 04:44:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.589 04:44:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.589 node0=512 expecting 512
00:04:02.589 04:44:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:02.589 04:44:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:02.589
00:04:02.589 real 0m0.671s
00:04:02.589 user 0m0.263s
00:04:02.589 sys 0m0.452s
00:04:02.589 04:44:25 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:02.589 04:44:25 -- common/autotest_common.sh@10 -- # set +x
00:04:02.589 ************************************
00:04:02.589 END TEST per_node_1G_alloc
00:04:02.589 ************************************
00:04:02.589 04:44:25 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
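Editor's aside: the verification that just printed "node0=512 expecting 512" leans on a compact bash idiom (hugepages.sh@126-130 above): an indexed array is used as a sorted set, each observed count becoming an array index, so the expected and actual per-node distributions can be compared without an explicit sort. Isolated below, with this run's single-node values filled in as an example:

    #!/usr/bin/env bash
    nodes_test=([0]=512)             # pages the test configured per node
    nodes_sys=([0]=512)              # pages sysfs actually reported
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1 # the count itself becomes the index
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # ${!arr[*]} lists indices in ascending order, hence the "sorted_" names
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "distribution matches"

The [[ 512 == \5\1\2 ]] seen in the trace is that index-list comparison, with xtrace displaying the quoted right-hand side in escaped form.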
00:04:02.589 04:44:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:02.589 04:44:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.589 04:44:25 -- common/autotest_common.sh@10 -- # set +x
00:04:02.589 ************************************
00:04:02.589 START TEST even_2G_alloc
00:04:02.589 ************************************
00:04:02.589 04:44:25 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:02.589 04:44:25 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:02.589 04:44:25 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:02.589 04:44:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:02.589 04:44:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:02.589 04:44:25 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.589 04:44:25 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.589 04:44:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.589 04:44:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:02.589 04:44:25 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.589 04:44:25 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.589 04:44:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:02.589 04:44:25 -- setup/hugepages.sh@83 -- # : 0
00:04:02.589 04:44:25 -- setup/hugepages.sh@84 -- # : 0
00:04:02.589 04:44:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.589 04:44:25 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:02.589 04:44:25 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:02.589 04:44:25 -- setup/hugepages.sh@153 -- # setup output
00:04:02.589 04:44:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.589 04:44:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:02.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:02.848 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.108 04:44:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:03.108 04:44:26 -- setup/hugepages.sh@89 -- # local node
00:04:03.108 04:44:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.108 04:44:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.108 04:44:26 -- setup/hugepages.sh@92 -- # local surp
00:04:03.108 04:44:26 -- setup/hugepages.sh@93 -- # local resv
00:04:03.108 04:44:26 -- setup/hugepages.sh@94 -- # local anon
00:04:03.108 04:44:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
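Editor's aside: two things worth unpacking from the test setup just traced. First, get_test_nr_hugepages turned the requested 2097152 kB pool into nr_hugepages=1024 by dividing by the default 2048 kB hugepage size, and HUGE_EVEN_ALLOC=yes asks setup.sh to spread that pool across nodes (one node here). Second, the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard at hugepages.sh@96 checks transparent-hugepage policy: the left side is, presumably, the content of /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only worth sampling when THP is not pinned to "[never]". A hedged sketch of both steps:

    #!/usr/bin/env bash
    default_hugepages=2048           # kB, matches 'Hugepagesize: 2048 kB' above
    size_kb=2097152                  # the 2 GiB pool even_2G_alloc requests
    (( size_kb >= default_hugepages )) || exit 1
    echo $(( size_kb / default_hugepages ))   # -> 1024, the NRHUGE exported above

    # e.g. "always [madvise] never"; brackets mark the active policy
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP can contribute anonymous hugepages, so record them separately
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=$anon"  # 0 on this runner, per the trace below
    fi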
00:04:03.108 04:44:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.108 04:44:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.108 04:44:26 -- setup/common.sh@18 -- # local node=
00:04:03.109 04:44:26 -- setup/common.sh@19 -- # local var val
00:04:03.109 04:44:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.109 04:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.109 04:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.109 04:44:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.109 04:44:26 -- setup/common.sh@28 -- mapfile -t mem
00:04:03.109 04:44:26 -- setup/common.sh@29 -- mem=("${mem[@]#Node +([0-9]) }")
00:04:03.109 04:44:26 -- setup/common.sh@31 -- # IFS=': '
00:04:03.109 04:44:26 -- setup/common.sh@16 -- printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5050980 kB' 'MemAvailable: 9436160 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415828 kB' 'Inactive: 4237708 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145592 kB' 'Mapped: 58400 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262168 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81068 kB' 'KernelStack: 5120 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 387816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:03.109 04:44:26 -- setup/common.sh@31 -- # read -r var val _
[... repetitive scan iterations elided: every field from MemTotal through HardwareCorrupted was compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with continue ...]
00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.110 04:44:26 -- setup/common.sh@33 -- # echo 0
00:04:03.110 04:44:26 -- setup/common.sh@33 -- # return 0
00:04:03.110 04:44:26 -- setup/hugepages.sh@97 -- # anon=0
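Editor's aside: anon is now 0, and the script goes on to collect HugePages_Surp and HugePages_Rsvd; together they feed the same consistency identity applied at hugepages.sh@107 earlier (total == requested + surplus + reserved), this time for the 1024-page pool. A sketch of that check against the live /proc/meminfo; on this runner the snapshot above gives 1024 == 1024 + 0 + 0:

    #!/usr/bin/env bash
    nr_expected=1024                 # the NRHUGE exported before setup.sh re-ran
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_expected + surp + resv )) && echo "hugepage pool consistent"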
'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262152 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81052 kB' 'KernelStack: 5040 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.110 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.110 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.110 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # continue 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 04:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 04:44:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.372 04:44:26 -- 
00:04:03.372 [xtrace condensed: setup/common.sh@31-32 replays the "IFS=': ' / read -r var val _ / continue" triplet for each remaining /proc/meminfo key (KReclaimable ... HugePages_Rsvd) until the requested key matches]
00:04:03.373 04:44:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.373 04:44:26 -- setup/common.sh@33 -- # echo 0
00:04:03.373 04:44:26 -- setup/common.sh@33 -- # return 0
00:04:03.373 04:44:26 -- setup/hugepages.sh@99 -- # surp=0
00:04:03.373 04:44:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.373 04:44:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.373 04:44:26 -- setup/common.sh@18 -- # local node=
00:04:03.373 04:44:26 -- setup/common.sh@19 -- # local var val
00:04:03.373 04:44:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.373 04:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.373 04:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.373 04:44:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.373 04:44:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.373 04:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.373 04:44:26 -- setup/common.sh@31 -- # IFS=': '
00:04:03.373 04:44:26 -- setup/common.sh@31 -- # read -r var val _
00:04:03.373 04:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051016 kB' 'MemAvailable: 9436196 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 415996 kB' 'Inactive: 4237708 kB' 'Active(anon): 127872 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145404 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262152 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81052 kB' 'KernelStack: 5056 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
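The get_meminfo calls traced above all replay the same mechanics: snapshot the file once with mapfile, then split each "key: value" line under IFS=': ' and stop at the requested key. A minimal standalone sketch of that parsing idea follows; the helper name is illustrative, not SPDK's exact setup/common.sh code.

#!/usr/bin/env bash
# Sketch: fetch one field from /proc/meminfo the way the xtrace above does,
# i.e. snapshot the file, then scan "key: value" pairs until the key matches.
get_meminfo_sketch() {
    local get=$1 line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo               # one consistent snapshot
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # same split the trace shows
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1                                     # key not present
}

get_meminfo_sketch HugePages_Total               # prints e.g. 1024 on this VM

Splitting on both ':' and space means the numeric value lands in val and the trailing unit (kB) falls into the throwaway third field, which is why the trace can echo the bare number.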
00:04:03.373 [xtrace condensed: per-key scan of the snapshot (MemTotal ... HugePages_Free), one continue per key, until HugePages_Rsvd matches]
00:04:03.374 04:44:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.374 04:44:26 -- setup/common.sh@33 -- # echo 0
00:04:03.374 04:44:26 -- setup/common.sh@33 -- # return 0
00:04:03.374 04:44:26 -- setup/hugepages.sh@100 -- # resv=0
00:04:03.374 nr_hugepages=1024
00:04:03.374 04:44:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.374 resv_hugepages=0
00:04:03.374 04:44:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.374 surplus_hugepages=0
00:04:03.374 04:44:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.374 anon_hugepages=0
00:04:03.374 04:44:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.374 04:44:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.374 04:44:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
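The arithmetic checks at setup/hugepages.sh@107 and @110 encode the rule being verified: the kernel's HugePages_Total must equal the count the test configured plus whatever the kernel reports as surplus and reserved pages. A hedged sketch of the same rule, with an illustrative helper and variable names that are not SPDK's own:

#!/usr/bin/env bash
# Sketch of the hugepage accounting rule the trace asserts. Values are read
# straight from /proc/meminfo; nr_hugepages is what the test configured.
meminfo_field() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

nr_hugepages=1024                        # count requested by the test
surp=$(meminfo_field HugePages_Surp)
resv=$(meminfo_field HugePages_Rsvd)
total=$(meminfo_field HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi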
00:04:03.374 04:44:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.374 04:44:26 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.374 04:44:26 -- setup/common.sh@18 -- # local node=
00:04:03.374 04:44:26 -- setup/common.sh@19 -- # local var val
00:04:03.374 04:44:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.374 04:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.374 04:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.374 04:44:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.374 04:44:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.374 04:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.374 04:44:26 -- setup/common.sh@31 -- # IFS=': '
00:04:03.374 04:44:26 -- setup/common.sh@31 -- # read -r var val _
00:04:03.375 04:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051016 kB' 'MemAvailable: 9436196 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 415656 kB' 'Inactive: 4237708 kB' 'Active(anon): 127532 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145324 kB' 'Mapped: 58372 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262152 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81052 kB' 'KernelStack: 5056 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:03.375 [xtrace condensed: per-key scan of the snapshot (MemTotal ... Unaccepted), no match until HugePages_Total]
00:04:03.376 04:44:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.376 04:44:26 -- setup/common.sh@33 -- # echo 1024
00:04:03.376 04:44:26 -- setup/common.sh@33 -- # return 0
00:04:03.376 04:44:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.376 04:44:26 -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.376 04:44:26 -- setup/hugepages.sh@27 -- # local node
00:04:03.376 04:44:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.376 04:44:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:03.376 04:44:26 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:03.376 04:44:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.376 04:44:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.376 04:44:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.376 04:44:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.376 04:44:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.376 04:44:26 -- setup/common.sh@18 -- # local node=0
00:04:03.376 04:44:26 -- setup/common.sh@19 -- # local var val
00:04:03.376 04:44:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.376 04:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.376 04:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.376 04:44:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.376 04:44:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.376 04:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.376 04:44:26 -- setup/common.sh@31 -- # IFS=': '
00:04:03.376 04:44:26 -- setup/common.sh@31 -- # read -r var val _
00:04:03.376 04:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051368 kB' 'MemUsed: 7194956 kB' 'SwapCached: 0 kB' 'Active: 415652 kB' 'Inactive: 4237708 kB' 'Active(anon): 127528 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 4537180 kB' 'Mapped: 58372 kB' 'AnonPages: 145372 kB' 'Shmem: 2596 kB' 'KernelStack: 5056 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181100 kB' 'Slab: 262152 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
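The HugePages_Surp 0 call just traced shows the per-node branch: with a node argument, common.sh swaps mem_f to the node's sysfs meminfo (@23-@24) and strips the "Node <N> " prefix those lines carry (@29). A sketch of those mechanics under the same assumptions; the function name is illustrative:

#!/usr/bin/env bash
# Sketch of the per-node meminfo lookup seen above. Per-node files live under
# /sys/devices/system/node/node<N>/meminfo and prefix every line with "Node <N> ".
shopt -s extglob                          # for the +([0-9]) prefix pattern

node_meminfo_sketch() {
    local node=$1 get=$2 mem_f=/proc/meminfo
    local -a mem
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
    printf '%s\n' "${mem[@]}" | awk -v k="$get:" '$1 == k { print $2; exit }'
}

node_meminfo_sketch 0 HugePages_Surp      # prints 0 in the run above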
00:04:03.376 [xtrace condensed: setup/common.sh@31-32 scans the node0 snapshot key by key (MemTotal ... SReclaimable) looking for HugePages_Surp]
00:04:03.377 [xtrace condensed: scan continues (SUnreclaim ... HugePages_Free) until the key matches]
00:04:03.377 04:44:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.377 04:44:26 -- setup/common.sh@33 -- # echo 0
00:04:03.377 04:44:26 -- setup/common.sh@33 -- # return 0
00:04:03.377 04:44:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.377 04:44:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.377 04:44:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.377 04:44:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:03.377 node0=1024 expecting 1024
00:04:03.377 04:44:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:03.377 
00:04:03.377 real 0m0.822s
00:04:03.377 user 0m0.258s
00:04:03.377 sys 0m0.602s
00:04:03.377 04:44:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:03.377 ************************************
00:04:03.377 END TEST even_2G_alloc
00:04:03.377 ************************************
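The sorted_t/sorted_s assignments at hugepages.sh@127 are a bash set idiom: each per-node page count is written as an array subscript, so identical counts collapse onto one slot and an even allocation leaves exactly one populated index. A small sketch of the idiom, with illustrative values standing in for the counts gathered above:

#!/usr/bin/env bash
# Sketch of the subscript-as-set trick: record each per-node count as an
# index; if every node got the same count, exactly one slot is populated.
declare -a nodes_test=([0]=1024)        # per-node counts (illustrative)
declare -a sorted_t=()

for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1        # duplicate counts land on one index
done

if (( ${#sorted_t[@]} == 1 )); then
    echo "allocation is even across ${#nodes_test[@]} node(s)"
else
    echo "uneven allocation, distinct counts: ${!sorted_t[*]}" >&2
fi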
00:04:03.377 04:44:26 -- common/autotest_common.sh@10 -- # set +x
00:04:03.377 04:44:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:03.377 04:44:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:03.377 04:44:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:03.377 04:44:26 -- common/autotest_common.sh@10 -- # set +x
00:04:03.377 ************************************
00:04:03.377 START TEST odd_alloc
00:04:03.377 ************************************
00:04:03.377 04:44:26 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:03.377 04:44:26 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:03.377 04:44:26 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:03.377 04:44:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:03.377 04:44:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.377 04:44:26 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.377 04:44:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.377 04:44:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:03.377 04:44:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:03.377 04:44:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.377 04:44:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.377 04:44:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:03.377 04:44:26 -- setup/hugepages.sh@83 -- # : 0
00:04:03.377 04:44:26 -- setup/hugepages.sh@84 -- # : 0
00:04:03.377 04:44:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.377 04:44:26 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:03.377 04:44:26 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:03.377 04:44:26 -- setup/hugepages.sh@160 -- # setup output
00:04:03.377 04:44:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.377 04:44:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:03.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:03.637 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
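For the odd_alloc setup above, get_test_nr_hugepages turns HUGEMEM=2049 (MB, i.e. size=2098176 kB) into nr_hugepages=1025 against the 2048 kB page size, which matches a round-up of 2098176/2048 = 1024.5. A sketch of that conversion, assuming ceiling division; the exact rounding inside SPDK's helper is not shown in this trace:

#!/usr/bin/env bash
# Sketch: convert a HUGEMEM request in MB to a hugepage count, rounding up
# so the reservation covers the full request (assumed behavior, not verified).
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)

size_kb=$(( 2049 * 1024 ))                               # HUGEMEM in MB -> kB
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "requesting $nr_hugepages pages of ${hugepagesize_kb} kB"    # -> 1025 here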
00:04:04.210 04:44:27 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:04.210 04:44:27 -- setup/hugepages.sh@89 -- # local node
00:04:04.210 04:44:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.210 04:44:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.210 04:44:27 -- setup/hugepages.sh@92 -- # local surp
00:04:04.210 04:44:27 -- setup/hugepages.sh@93 -- # local resv
00:04:04.210 04:44:27 -- setup/hugepages.sh@94 -- # local anon
00:04:04.210 04:44:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.210 04:44:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.210 04:44:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.210 04:44:27 -- setup/common.sh@18 -- # local node=
00:04:04.210 04:44:27 -- setup/common.sh@19 -- # local var val
00:04:04.210 04:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.210 04:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.210 04:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.210 04:44:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.210 04:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.210 04:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.210 04:44:27 -- setup/common.sh@31 -- # IFS=': '
00:04:04.210 04:44:27 -- setup/common.sh@31 -- # read -r var val _
00:04:04.210 04:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5044524 kB' 'MemAvailable: 9429704 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 416228 kB' 'Inactive: 4237708 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145604 kB' 'Mapped: 58336 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262192 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81092 kB' 'KernelStack: 5124 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20168 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:04.210 [xtrace condensed: per-key scan of the snapshot (MemTotal ... HardwareCorrupted), no match until AnonHugePages]
00:04:04.211 04:44:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.211 04:44:27 -- setup/common.sh@33 -- # echo 0
00:04:04.212 04:44:27 -- setup/common.sh@33 -- # return 0
00:04:04.212 04:44:27 -- setup/hugepages.sh@97 -- # anon=0
00:04:04.212 04:44:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.212 04:44:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.212 04:44:27 -- setup/common.sh@18 -- # local node=
00:04:04.212 04:44:27 -- setup/common.sh@19 -- # local var val
00:04:04.212 04:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.212 04:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.212 04:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.212 04:44:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.212 04:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.212 04:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': '
00:04:04.212 04:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5044544 kB' 'MemAvailable: 9429724 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415792 kB' 'Inactive: 4237708 kB' 'Active(anon): 127668 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816
kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145432 kB' 'Mapped: 58332 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262192 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81092 kB' 'KernelStack: 5108 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 388204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20152 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # continue 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.212 04:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.212 04:44:27 -- setup/common.sh@32 -- # [[ 
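Readable through the xtrace noise, get_meminfo is just a /proc/meminfo key lookup. Below is a minimal, self-contained bash sketch of the pattern the records above trace (mapfile into an array, strip any 'Node <N> ' prefix, split each line on ': ', compare the key). It illustrates the visible logic only and is not the exact source of SPDK's setup/common.sh.

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {    # illustrative re-implementation of the traced helper
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local var val _

        # A node argument redirects the lookup to that node's sysfs meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines are prefixed "Node 0 "; strip it so both formats parse alike
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 1025 for HugePages_Total on this box
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp      # system-wide
    get_meminfo HugePages_Surp 0    # NUMA node 0

The real script walks the array with the same IFS=': ' read seen in the trace; the for loop above is an equivalent, slightly more compact rendering.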
00:04:04.214 04:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.214 04:44:27 -- setup/common.sh@33 -- # echo 0
00:04:04.214 04:44:27 -- setup/common.sh@33 -- # return 0
00:04:04.214 04:44:27 -- setup/hugepages.sh@99 -- # surp=0
00:04:04.214 04:44:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.214 04:44:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.214 04:44:27 -- setup/common.sh@18 -- # local node=
00:04:04.214 04:44:27 -- setup/common.sh@19 -- # local var val
00:04:04.214 04:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.214 04:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.214 04:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.214 04:44:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.214 04:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.214 04:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.214 04:44:27 -- setup/common.sh@31 -- # IFS=': '
00:04:04.214 04:44:27 -- setup/common.sh@31 -- # read -r var val _
00:04:04.214 04:44:27 -- setup/common.sh@16 -- # printf '%s\n' [snapshot condensed: same fields as the dump above, now with 'Active: 415748 kB' 'Active(anon): 127624 kB' 'AnonPages: 145344 kB' 'KernelStack: 5092 kB' 'PageTables: 4560 kB']
[xtrace condensed: setup/common.sh@32 checks each key from MemTotal onward against HugePages_Rsvd and continues past every non-matching key]
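A note on the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d: that is bash xtrace rendering, not corruption. When an unquoted variable expansion appears on the right of [[ ... == ... ]], set -x prints the expanded word with each character backslash-escaped to show it is matched as a literal pattern. A minimal reproduction (bash, illustrative):

    set -x
    key=HugePages_Surp
    [[ MemTotal == $key ]]
    # trace output, along the lines of: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

That is why every scan record in this log shows the target key escaped while the left-hand key reads normally.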
00:04:04.216 04:44:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.216 04:44:27 -- setup/common.sh@33 -- # echo 0
00:04:04.216 04:44:27 -- setup/common.sh@33 -- # return 0
00:04:04.216 04:44:27 -- setup/hugepages.sh@100 -- # resv=0
00:04:04.216 nr_hugepages=1025
00:04:04.216 04:44:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:04.216 resv_hugepages=0
00:04:04.216 04:44:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.216 04:44:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.216 surplus_hugepages=0
00:04:04.216 anon_hugepages=0
00:04:04.216 04:44:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.216 04:44:27 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:04.216 04:44:27 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
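The two arithmetic checks above are the heart of the test: with anon=0, surp=0 and resv=0 read back from /proc/meminfo, the harness requires the requested page count to be fully accounted for. A sketch with this run's values (variable names follow the trace):

    nr_hugepages=1025   # requested count; odd, matching the odd_alloc test's name
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total, read next in the log
    # 1025 == 1025 + 0 + 0, so no surplus or reserved pages leaked in
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"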
00:04:04.216 04:44:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.216 04:44:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.216 04:44:27 -- setup/common.sh@18 -- # local node=
00:04:04.216 04:44:27 -- setup/common.sh@19 -- # local var val
00:04:04.216 04:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.216 04:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.216 04:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.216 04:44:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.216 04:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.216 04:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.216 04:44:27 -- setup/common.sh@31 -- # IFS=': '
00:04:04.216 04:44:27 -- setup/common.sh@31 -- # read -r var val _
00:04:04.216 04:44:27 -- setup/common.sh@16 -- # printf '%s\n' [snapshot condensed: same fields as the first dump, now with 'Active: 415752 kB' 'Active(anon): 127628 kB' 'AnonPages: 145352 kB' 'Slab: 262184 kB' 'SUnreclaim: 81084 kB' 'KernelStack: 5092 kB' 'PageTables: 4560 kB']
[xtrace condensed: setup/common.sh@32 checks each key from MemTotal onward against HugePages_Total and continues past every non-matching key]
00:04:04.218 04:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.218 04:44:27 -- setup/common.sh@33 -- # echo 1025
00:04:04.218 04:44:27 -- setup/common.sh@33 -- # return 0
00:04:04.218 04:44:27 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:04.218 04:44:27 -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.218 04:44:27 -- setup/hugepages.sh@27 -- # local node
00:04:04.218 04:44:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.218 04:44:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:04.218 04:44:27 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:04.218 04:44:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.218 04:44:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.218 04:44:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.218 04:44:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.218 04:44:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.218 04:44:27 -- setup/common.sh@18 -- # local node=0
00:04:04.218 04:44:27 -- setup/common.sh@19 -- # local var val
00:04:04.218 04:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.218 04:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.218 04:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.218 04:44:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.218 04:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.218 04:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.218 04:44:27 -- setup/common.sh@31 -- # IFS=': '
00:04:04.218 04:44:27 -- setup/common.sh@31 -- # read -r var val _
00:04:04.218 04:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5044544 kB' 'MemUsed: 7201780 kB' 'SwapCached: 0 kB' 'Active: 415748 kB' 'Inactive: 4237708 kB' 'Active(anon): 127624 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 4537176 kB' 'Mapped: 58332 kB' 'AnonPages: 145352 kB' 'Shmem: 2596 kB' 'KernelStack: 5092 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181100 kB' 'Slab: 262184 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 checks each node0 key from MemTotal onward against HugePages_Surp and continues past every non-matching key]
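Note the branch in the records above: because get_meminfo was called with node=0, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo. Per-node lines carry a 'Node 0 ' prefix (and a slightly different field set, e.g. MemUsed and FilePages instead of MemAvailable), which the extglob expansion strips so the same parser handles both files. Illustrative one-liner:

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"   # prints: HugePages_Surp: 0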
00:04:04.220 04:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.220 04:44:27 -- setup/common.sh@33 -- # echo 0
00:04:04.220 04:44:27 -- setup/common.sh@33 -- # return 0
00:04:04.220 04:44:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.220 04:44:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.220 04:44:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.220 04:44:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.220 node0=1025 expecting 1025
00:04:04.220 04:44:27 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:04.220 04:44:27 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:04.220
00:04:04.220 real 0m0.835s
00:04:04.220 user 0m0.241s
00:04:04.220 sys 0m0.635s
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.220 04:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:04.220 ************************************ 00:04:04.220 END TEST odd_alloc 00:04:04.220 ************************************ 00:04:04.220 04:44:27 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:04.220 04:44:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.220 04:44:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.220 04:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:04.220 ************************************ 00:04:04.220 START TEST custom_alloc 00:04:04.220 ************************************ 00:04:04.220 04:44:27 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:04.220 04:44:27 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:04.220 04:44:27 -- setup/hugepages.sh@169 -- # local node 00:04:04.220 04:44:27 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:04.220 04:44:27 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:04.220 04:44:27 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.220 04:44:27 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:04.220 04:44:27 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.220 04:44:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.220 04:44:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.220 04:44:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.220 04:44:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.220 04:44:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.220 04:44:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.220 04:44:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.220 04:44:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.220 04:44:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.220 04:44:27 -- setup/hugepages.sh@83 -- # : 0 00:04:04.220 04:44:27 -- setup/hugepages.sh@84 -- # : 0 00:04:04.220 04:44:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:04.220 04:44:27 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.220 04:44:27 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.220 04:44:27 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:04.220 04:44:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.220 04:44:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.220 04:44:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.220 04:44:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.220 04:44:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.220 04:44:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.220 04:44:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:04.220 04:44:27 -- setup/hugepages.sh@75 -- # for 
_no_nodes in "${!nodes_hp[@]}" 00:04:04.220 04:44:27 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.220 04:44:27 -- setup/hugepages.sh@78 -- # return 0 00:04:04.220 04:44:27 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:04.220 04:44:27 -- setup/hugepages.sh@187 -- # setup output 00:04:04.220 04:44:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.220 04:44:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:04.761 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.761 04:44:28 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:04.761 04:44:28 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:04.761 04:44:28 -- setup/hugepages.sh@89 -- # local node 00:04:04.761 04:44:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.761 04:44:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.761 04:44:28 -- setup/hugepages.sh@92 -- # local surp 00:04:04.761 04:44:28 -- setup/hugepages.sh@93 -- # local resv 00:04:04.761 04:44:28 -- setup/hugepages.sh@94 -- # local anon 00:04:04.761 04:44:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.761 04:44:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.761 04:44:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.761 04:44:28 -- setup/common.sh@18 -- # local node= 00:04:04.761 04:44:28 -- setup/common.sh@19 -- # local var val 00:04:04.761 04:44:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.761 04:44:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.761 04:44:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.761 04:44:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.761 04:44:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.761 04:44:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.761 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.761 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.761 04:44:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6096192 kB' 'MemAvailable: 10481372 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 416036 kB' 'Inactive: 4237708 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145632 kB' 'Mapped: 58604 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262208 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81108 kB' 'KernelStack: 5052 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20168 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:04.761 04:44:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.761 04:44:28 -- 
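[Annotation] The scan traced throughout this log boils down to a few lines of bash: get_meminfo splits each "Key:   value kB" line of /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo file) with IFS=': ' and compares the key against the requested field; the backslash-heavy operands in the xtrace output (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are simply how bash -x renders a quoted, literal right-hand side of [[ == ]]. A minimal sketch of that scan, with a hypothetical helper name rather than the literal setup/common.sh source:

    #!/usr/bin/env bash
    # Illustrative re-creation of the field scan seen in the trace above.
    meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # "HugePages_Surp:   0" -> var=HugePages_Surp, val=0
            if [[ $var == "$get" ]]; then      # quoted RHS forces a literal match, not a glob
                echo "$val"                    # corresponds to the "echo 0" at setup/common.sh@33
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    meminfo_field HugePages_Surp               # on this runner: 0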
00:04:04.762 [... setup/common.sh@31-32 xtrace repeats for each snapshot field (MemTotal through HardwareCorrupted), none matching AnonHugePages ...]
00:04:04.763 04:44:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.763 04:44:28 -- setup/common.sh@33 -- # echo 0
00:04:04.763 04:44:28 -- setup/common.sh@33 -- # return 0
00:04:04.763 04:44:28 -- setup/hugepages.sh@97 -- # anon=0
00:04:04.763 04:44:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.763 04:44:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.763 04:44:28 -- setup/common.sh@18 -- # local node=
00:04:04.763 04:44:28 -- setup/common.sh@19 -- # local var val
00:04:04.763 04:44:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.763 04:44:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.763 04:44:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.763 04:44:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.763 04:44:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.763 04:44:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.763 04:44:28 -- setup/common.sh@31 -- # IFS=': '
00:04:04.763 04:44:28 -- setup/common.sh@31 -- # read -r var val _
00:04:04.763 04:44:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6096192 kB' 'MemAvailable: 10481372 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415772 kB' 'Inactive: 4237708 kB' 'Active(anon): 127648 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145380 kB' 'Mapped: 58592 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262204 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81104 kB' 'KernelStack: 5020 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
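[Annotation] The mapfile/parameter-expansion pair at setup/common.sh@28-29 is what lets one parser serve both the system-wide and the per-node meminfo files: per-node lines carry a "Node N " prefix that is stripped with an extglob pattern. A sketch of that normalization, assuming a node 0 with a per-node file (in the trace above $node is empty, so the -e test fails and /proc/meminfo is used):

    #!/usr/bin/env bash
    # Sketch of the snapshot handling visible at setup/common.sh@28-29.
    shopt -s extglob                          # required for the +([0-9]) pattern below
    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo    # system-wide fallback
    mapfile -t mem < "$mem_f"                 # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node 0 " prefix per-node lines carry
    printf '%s\n' "${mem[@]:0:3}"             # normalized lines, same shape either way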
00:04:04.764 [... setup/common.sh@31-32 xtrace repeats the same scan against HugePages_Surp (MemTotal through HugePages_Free skipped) ...]
00:04:05.027 04:44:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.027 04:44:28 -- setup/common.sh@33 -- # echo 0
00:04:05.027 04:44:28 -- setup/common.sh@33 -- # return 0
00:04:05.027 04:44:28 -- setup/hugepages.sh@99 -- # surp=0
00:04:05.027 04:44:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.027 04:44:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.027 04:44:28 -- setup/common.sh@18 -- # local node=
00:04:05.027 04:44:28 -- setup/common.sh@19 -- # local var val
00:04:05.027 04:44:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.027 04:44:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.027 04:44:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.027 04:44:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.027 04:44:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.027 04:44:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.027 04:44:28 -- setup/common.sh@31 -- # IFS=': '
00:04:05.027 04:44:28 -- setup/common.sh@31 -- # read -r var val _
00:04:05.028 04:44:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6096192 kB' 'MemAvailable: 10481372 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415788 kB' 'Inactive: 4237708 kB' 'Active(anon): 127664 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145380 kB' 'Mapped: 58592 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262204 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81104 kB' 'KernelStack: 5020 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20152 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
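[Annotation] The 512-page figure these snapshots keep reporting ('HugePages_Total: 512', 'Hugetlb: 1048576 kB') follows from the get_test_nr_hugepages 1048576 call traced at the start of custom_alloc: the requested pool size in kB is divided by the default hugepage size. A sketch of that arithmetic, assuming the 2048 kB Hugepagesize the snapshot reports (variable names are illustrative):

    #!/usr/bin/env bash
    # How the test derives its page count from a pool size in kB.
    size_kb=1048576                                                      # 1 GiB pool, in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    (( size_kb >= hugepagesize_kb )) || exit 1                           # mirrors the @55 guard
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # 1048576 / 2048 = 512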
00:04:05.028 [... setup/common.sh@31-32 xtrace repeats the same scan against HugePages_Rsvd (MemTotal through HugePages_Free skipped) ...]
00:04:05.029 04:44:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.029 04:44:28 -- setup/common.sh@33 -- # echo 0
00:04:05.029 04:44:28 -- setup/common.sh@33 -- # return 0
00:04:05.029 nr_hugepages=512
00:04:05.029 resv_hugepages=0
00:04:05.029 04:44:28 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.029 04:44:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:05.029 04:44:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.029 surplus_hugepages=0
00:04:05.029 anon_hugepages=0
00:04:05.029 04:44:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.029 04:44:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.029 04:44:28 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:05.029 04:44:28 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:05.029 04:44:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.029 04:44:28 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.029 04:44:28 -- setup/common.sh@18 -- # local node=
00:04:05.029 04:44:28 -- setup/common.sh@19 -- # local var val
00:04:05.029 04:44:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.029 04:44:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.029 04:44:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.029 04:44:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.029 04:44:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.029 04:44:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.029 04:44:28 -- setup/common.sh@31 -- # IFS=': '
00:04:05.029 04:44:28 -- setup/common.sh@31 -- # read -r var val _
00:04:05.029 04:44:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6097232 kB' 'MemAvailable: 10482412 kB' 'Buffers: 35452 kB' 'Cached: 4501724 kB' 'SwapCached: 0 kB' 'Active: 415880 kB' 'Inactive: 4237708 kB' 'Active(anon): 127756 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 145520 kB' 'Mapped: 58592 kB' 'Shmem: 2596 kB' 'KReclaimable: 181100 kB' 'Slab: 262204 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81104 kB' 'KernelStack: 5068 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 388336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20152 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
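[Annotation] The check traced at setup/hugepages.sh@107, "(( 512 == nr_hugepages + surp + resv ))", is the heart of verify_nr_hugepages: the page count the test configured must be accounted for by what the kernel reports once surplus and reserved pages are folded in (all zero on this runner, so the check reduces to HugePages_Total == 512). A sketch of that accounting under those assumptions; field() is an illustrative helper, not part of the SPDK scripts:

    #!/usr/bin/env bash
    # Sketch of the hugepage accounting behind the @107 check.
    field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    expected=512                       # pages the test configured via HUGENODE
    nr=$(field HugePages_Total)
    surp=$(field HugePages_Surp)       # surplus pages allocated beyond the static pool
    resv=$(field HugePages_Rsvd)       # reserved but not yet faulted in
    (( expected == nr + surp + resv )) && echo "hugepage accounting consistent"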
00:04:05.030 [... setup/common.sh@31-32 xtrace scans the snapshot fields against HugePages_Total; the excerpt ends mid-scan ...]
# continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # continue 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.030 04:44:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.030 04:44:28 -- setup/common.sh@33 -- # echo 512 00:04:05.030 04:44:28 -- setup/common.sh@33 -- # return 0 00:04:05.030 04:44:28 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.030 04:44:28 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.030 04:44:28 -- setup/hugepages.sh@27 -- # local node 00:04:05.030 04:44:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.030 04:44:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.030 04:44:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.030 04:44:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.030 04:44:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.030 04:44:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.030 04:44:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.030 04:44:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.030 04:44:28 -- setup/common.sh@18 -- # local node=0 00:04:05.030 04:44:28 -- setup/common.sh@19 -- # local var val 00:04:05.030 04:44:28 -- setup/common.sh@20 -- # local mem_f 
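The block above is one complete get_meminfo cycle: the helper slurps /proc/meminfo (or a node's sysfs copy when a node id is passed), strips any leading 'Node N ' prefix, then walks the snapshot with IFS=': ' / read -r until the requested field matches, echoes its value (512 for HugePages_Total here) and returns. A minimal standalone sketch of that lookup, with get_meminfo_sketch as a hypothetical name rather than the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Sketch only: approximates the lookup traced above, not the SPDK source.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        # A node id redirects the lookup to that node's sysfs meminfo copy.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node lines carry a "Node N " prefix; strip it so both file
        # formats parse the same way, then scan field by field as the
        # xtrace shows (one read per line, continue on a mismatch).
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 512 for HugePages_Total on this box
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo_sketch HugePages_Total   # prints 512 in the state logged above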
00:04:05.030 04:44:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.030 04:44:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.030 04:44:28 -- setup/common.sh@18 -- # local node=0
00:04:05.030 04:44:28 -- setup/common.sh@19 -- # local var val
00:04:05.030 04:44:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.030 04:44:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.030 04:44:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.030 04:44:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.030 04:44:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.030 04:44:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.030 04:44:28 -- setup/common.sh@31 -- # IFS=': '
00:04:05.030 04:44:28 -- setup/common.sh@31 -- # read -r var val _
00:04:05.030 04:44:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6097592 kB' 'MemUsed: 6148732 kB' 'SwapCached: 0 kB' 'Active: 416092 kB' 'Inactive: 4237708 kB' 'Active(anon): 127968 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237708 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 4537176 kB' 'Mapped: 58592 kB' 'AnonPages: 145488 kB' 'Shmem: 2596 kB' 'KernelStack: 5036 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181100 kB' 'Slab: 262204 kB' 'SReclaimable: 181100 kB' 'SUnreclaim: 81104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the same setup/common.sh@31-32 IFS / read / continue scan over node0's meminfo fields until HugePages_Surp matches]
00:04:05.031 04:44:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.031 04:44:28 -- setup/common.sh@33 -- # echo 0
00:04:05.031 04:44:28 -- setup/common.sh@33 -- # return 0
00:04:05.031 04:44:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.031 04:44:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.031 04:44:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.031 04:44:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.031 04:44:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:05.031 node0=512 expecting 512
00:04:05.031 04:44:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:05.031
00:04:05.031 real 0m0.714s
00:04:05.031 user 0m0.244s
00:04:05.031 sys 0m0.479s
00:04:05.031 04:44:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:05.031 ************************
00:04:05.031 END TEST custom_alloc
00:04:05.031 ************************
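custom_alloc passes on the numbers pulled out above: HugePages_Total (512) equals nr_hugepages plus the surplus and reserved counts (both 0), and the single NUMA node of this VM (no_nodes=1) holds the whole pool, hence node0=512 expecting 512. The identity the hugepages.sh@107/@110 lines evaluate, spelled out as plain shell arithmetic:

    # The invariant behind the (( 512 == nr_hugepages + surp + resv )) trace:
    nr_hugepages=512 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv )) && echo ok   # -> ok
    # Per-node view: one node, so nodes_test[0] must carry all 512 pages.
    echo "node0=512 expecting 512"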
00:04:05.031 04:44:28 -- common/autotest_common.sh@10 -- # set +x
00:04:05.031 04:44:28 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:05.031 04:44:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.031 04:44:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.031 ************************
00:04:05.031 START TEST no_shrink_alloc
00:04:05.031 ************************
00:04:05.031 04:44:28 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:05.031 04:44:28 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:05.031 04:44:28 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:05.031 04:44:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:05.031 04:44:28 -- setup/hugepages.sh@51 -- # shift
00:04:05.031 04:44:28 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:05.031 04:44:28 -- setup/hugepages.sh@52 -- # local node_ids
00:04:05.031 04:44:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.031 04:44:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:05.031 04:44:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:05.031 04:44:28 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:05.031 04:44:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.031 04:44:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:05.031 04:44:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:05.031 04:44:28 -- setup/hugepages.sh@67 -- # nodes_test=()
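no_shrink_alloc starts by asking get_test_nr_hugepages for a 2097152 kB (2 GiB) pool pinned to node 0; with the 2048 kB Hugepagesize reported in the meminfo snapshots, that works out to the nr_hugepages=1024 seen at hugepages.sh@57. The conversion, as illustrative arithmetic rather than the hugepages.sh source:

    size_kb=2097152     # pool size requested by the test, in kB
    hugepage_kb=2048    # Hugepagesize field from /proc/meminfo
    echo $((size_kb / hugepage_kb))   # -> 1024 pages, all assigned to node 0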
00:04:05.032 04:44:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.032 04:44:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:05.032 04:44:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.032 04:44:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:05.032 04:44:28 -- setup/hugepages.sh@73 -- # return 0
00:04:05.032 04:44:28 -- setup/hugepages.sh@198 -- # setup output
00:04:05.032 04:44:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.032 04:44:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:05.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:05.860 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:05.861 04:44:29 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:05.861 04:44:29 -- setup/hugepages.sh@89 -- # local node
00:04:05.861 04:44:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.861 04:44:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.861 04:44:29 -- setup/hugepages.sh@92 -- # local surp
00:04:05.861 04:44:29 -- setup/hugepages.sh@93 -- # local resv
00:04:05.861 04:44:29 -- setup/hugepages.sh@94 -- # local anon
00:04:05.861 04:44:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.861 04:44:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.861 04:44:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.861 04:44:29 -- setup/common.sh@18 -- # local node=
00:04:05.861 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:05.861 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.861 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.861 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.861 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.861 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.861 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.861 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:05.861 04:44:29 -- setup/common.sh@31 -- # read -r var val _
00:04:05.861 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051500 kB' 'MemAvailable: 9436676 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414536 kB' 'Inactive: 4237712 kB' 'Active(anon): 126412 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 68 kB' 'AnonPages: 143920 kB' 'Mapped: 57508 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262144 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81052 kB' 'KernelStack: 5008 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: setup/common.sh@31-32 per-field IFS / read / continue scan until AnonHugePages matches]
00:04:05.862 04:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.862 04:44:29 -- setup/common.sh@33 -- # echo 0
00:04:05.862 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:05.862 04:44:29 -- setup/hugepages.sh@97 -- # anon=0
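verify_nr_hugepages first rules out transparent hugepages skewing the count: the hugepages.sh@96 test checks that /sys/kernel/mm/transparent_hugepage/enabled (here 'always [madvise] never') is not pinned to [never] before sampling AnonHugePages, which reads back 0 kB, so anon=0. A hedged sketch of that guard, reusing the hypothetical get_meminfo_sketch helper from above (the kB-to-pages division is illustrative):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(get_meminfo_sketch AnonHugePages)
        anon=$((anon_kb / 2048))   # kB -> 2 MiB pages; 0 in this run
    fi
    # The surplus and reserved lookups that follow reuse the same meminfo walk.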
00:04:05.862 04:44:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.862 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.862 04:44:29 -- setup/common.sh@18 -- # local node=
00:04:05.862 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:05.862 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.862 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.862 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.862 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.862 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.862 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.862 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:05.862 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051500 kB' 'MemAvailable: 9436676 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414212 kB' 'Inactive: 4237712 kB' 'Active(anon): 126088 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 68 kB' 'AnonPages: 143636 kB' 'Mapped: 57508 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262140 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81048 kB' 'KernelStack: 4992 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:05.862 04:44:29 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-32 per-field IFS / read / continue scan until HugePages_Surp matches]
00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.863 04:44:29 -- setup/common.sh@33 -- # echo 0
00:04:05.863 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:05.863 04:44:29 -- setup/hugepages.sh@99 -- # surp=0
00:04:05.863 04:44:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.863 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.863 04:44:29 -- setup/common.sh@18 -- # local node=
00:04:05.863 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:05.863 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.863 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.863 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.863 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.863 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.863 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:05.863 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051248
kB' 'MemAvailable: 9436424 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414240 kB' 'Inactive: 4237712 kB' 'Active(anon): 126116 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 68 kB' 'AnonPages: 143920 kB' 'Mapped: 57508 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262140 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81048 kB' 'KernelStack: 4992 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 
-- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 
00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # continue 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.863 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.863 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.863 04:44:29 -- setup/common.sh@33 -- # echo 0 00:04:05.863 04:44:29 -- setup/common.sh@33 -- # return 0 00:04:05.863 nr_hugepages=1024 00:04:05.863 resv_hugepages=0 00:04:05.863 surplus_hugepages=0 00:04:05.863 anon_hugepages=0 00:04:05.863 04:44:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.863 04:44:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.863 04:44:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.863 04:44:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.863 04:44:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.863 04:44:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.863 04:44:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.863 04:44:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.863 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.863 04:44:29 -- setup/common.sh@18 -- # local node= 00:04:05.863 04:44:29 -- setup/common.sh@19 -- # local var val 00:04:05.863 04:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.863 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.863 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.864 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.864 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.864 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.864 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.864 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.864 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051248 kB' 'MemAvailable: 9436424 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414348 kB' 'Inactive: 4237712 kB' 'Active(anon): 126224 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 68 kB' 'AnonPages: 143796 kB' 'Mapped: 57476 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262140 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81048 kB' 'KernelStack: 4992 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:05.864 
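The xtrace above is SPDK's get_meminfo helper walking /proc/meminfo one field at a time under IFS=': ' until the requested key matches. A minimal standalone sketch of that idiom follows; it is a reconstruction for illustration, not the exact setup/common.sh code (the real helper also uses mapfile and an extglob expansion to strip the "Node <n>" prefix that per-node sysfs files carry):

  # Sketch of the /proc/meminfo lookup idiom traced above (assumed
  # reconstruction; function name and structure mirror the trace).
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      # Per-node statistics live under sysfs when a node id is given.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
          # NOTE: lines there are prefixed "Node <n> ", which the real
          # helper strips before matching; omitted here for brevity.
      fi
      while IFS=': ' read -r var val _; do
          # Each line looks like "HugePages_Surp:    0"; print the value
          # of the requested field and stop.
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }

Called as surp=$(get_meminfo HugePages_Surp), which is what produces the surp=0 assignment seen in the trace.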
[per-field /proc/meminfo comparison loop for HugePages_Total elided]
00:04:05.864 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.864 04:44:29 -- setup/common.sh@33 -- # echo 1024
00:04:05.864 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:05.864 04:44:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.864 04:44:29 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.864 04:44:29 -- setup/hugepages.sh@27 -- # local node
00:04:05.864 04:44:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.864 04:44:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.864 04:44:29 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:05.864 04:44:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.864 04:44:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.864 04:44:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.864 04:44:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.864 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.864 04:44:29 -- setup/common.sh@18 -- # local node=0
00:04:05.864 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:05.864 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.864 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.864 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.864 04:44:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.864 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.864 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.864 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:05.864 04:44:29 -- setup/common.sh@31 -- # read -r var val _
00:04:05.864 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5051248 kB' 'MemUsed: 7195076 kB' 'SwapCached: 0 kB' 'Active: 414288 kB' 'Inactive: 4237712 kB' 'Active(anon): 126164 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 68 kB' 'FilePages: 4537180 kB' 'Mapped: 57476 kB' 'AnonPages: 143676 kB' 'Shmem: 2596 kB' 'KernelStack: 4976 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181092 kB' 'Slab: 262140 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[per-field node0 meminfo comparison loop for HugePages_Surp elided]
00:04:05.865 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.865 04:44:29 -- setup/common.sh@33 -- # echo 0
00:04:05.865 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:05.865 04:44:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.865 04:44:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.865 04:44:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.865 04:44:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.865 node0=1024 expecting 1024
00:04:05.865 04:44:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:05.865 04:44:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.865 04:44:29 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:05.865 04:44:29 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:05.865 04:44:29 -- setup/hugepages.sh@202 -- # setup output
00:04:05.865 04:44:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.865 04:44:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:06.124 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.386 INFO: Requested 512 hugepages but 1024 already allocated on node0
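The node accounting traced above (get_nodes, nodes_sys, nodes_test, and the "node0=1024 expecting 1024" line) reduces to summing reserved and surplus pages per NUMA node and comparing against what sysfs reports. A hedged sketch of that logic is below; verify_nodes and the array initializations are illustrative, not the literal hugepages.sh code, and get_meminfo is the helper sketched earlier:

  # Illustrative reconstruction of the per-node hugepage check.
  verify_nodes() {
      local -A nodes_sys=([0]=1024)    # pages sysfs reports per node
      local -A nodes_test=([0]=1024)   # pages the test expects per node
      local node surp resv=0
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))               # reserved pages count as ours
          surp=$(get_meminfo HugePages_Surp "$node")   # 0 in this run
          (( nodes_test[node] += surp ))               # so do surplus pages
          echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
          [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
      done
  }

With one node and no reserved or surplus pages, both sides stay at 1024, which is why the [[ 1024 == \1\0\2\4 ]] test passes and setup.sh is then re-run with NRHUGE=512.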
00:04:06.386 04:44:29 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:06.386 04:44:29 -- setup/hugepages.sh@89 -- # local node
00:04:06.386 04:44:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.386 04:44:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.386 04:44:29 -- setup/hugepages.sh@92 -- # local surp
00:04:06.386 04:44:29 -- setup/hugepages.sh@93 -- # local resv
00:04:06.386 04:44:29 -- setup/hugepages.sh@94 -- # local anon
00:04:06.386 04:44:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.386 04:44:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.386 04:44:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.386 04:44:29 -- setup/common.sh@18 -- # local node=
00:04:06.386 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:06.386 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.386 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.386 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.386 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.386 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.386 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.386 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:06.386 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5054520 kB' 'MemAvailable: 9439696 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414544 kB' 'Inactive: 4237712 kB' 'Active(anon): 126420 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 144232 kB' 'Mapped: 57524 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262136 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81044 kB' 'KernelStack: 5040 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:06.386 04:44:29 -- setup/common.sh@31 -- # read -r var val _
[per-field /proc/meminfo comparison loop for AnonHugePages elided]
00:04:06.387 04:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.387 04:44:29 -- setup/common.sh@33 -- # echo 0
00:04:06.387 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:06.387 04:44:29 -- setup/hugepages.sh@97 -- # anon=0
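Before counting anonymous huge pages, verify_nr_hugepages gates on transparent hugepages: the traced test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is the content of /sys/kernel/mm/transparent_hugepage/enabled being checked for the "[never]" marker (the bracketed entry is the active mode). A small sketch of that gate, as an assumed reconstruction using the get_meminfo helper sketched earlier:

  # Sketch of the THP gate seen in the trace: only when THP is not
  # disabled can anonymous huge pages appear behind the test's back,
  # so record the current AnonHugePages figure for later accounting.
  thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp_enabled != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 in this run
  fi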
00:04:06.387 04:44:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.387 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.387 04:44:29 -- setup/common.sh@18 -- # local node=
00:04:06.387 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:06.387 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.387 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.387 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.387 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.387 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.387 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.387 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:06.387 04:44:29 -- setup/common.sh@31 -- # read -r var val _
00:04:06.387 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5055044 kB' 'MemAvailable: 9440220 kB' 'Buffers: 35452 kB' 'Cached: 4501728 kB' 'SwapCached: 0 kB' 'Active: 414672 kB' 'Inactive: 4237712 kB' 'Active(anon): 126548 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 143888 kB' 'Mapped: 57484 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262140 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81048 kB' 'KernelStack: 5056 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[per-field /proc/meminfo comparison loop for HugePages_Surp elided; trace continues]
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 
-- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.388 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.388 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.388 04:44:29 -- setup/common.sh@33 -- # echo 0 00:04:06.389 04:44:29 -- setup/common.sh@33 -- # return 0 00:04:06.389 04:44:29 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.389 04:44:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.389 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.389 04:44:29 -- setup/common.sh@18 -- # local node= 00:04:06.389 04:44:29 -- setup/common.sh@19 -- # local var val 00:04:06.389 04:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.389 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.389 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.389 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.389 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.389 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5055240 kB' 'MemAvailable: 9440416 kB' 'Buffers: 35452 kB' 'Cached: 4501732 kB' 'SwapCached: 0 kB' 'Active: 414112 kB' 'Inactive: 4237712 kB' 'Active(anon): 125988 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237712 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 143540 kB' 'Mapped: 57476 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262136 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81044 kB' 'KernelStack: 4960 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # 
continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.389 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.389 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 
04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # continue 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.390 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.390 04:44:29 -- setup/common.sh@33 -- # echo 0 00:04:06.390 04:44:29 -- setup/common.sh@33 -- # return 0 00:04:06.390 04:44:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.390 nr_hugepages=1024 00:04:06.390 04:44:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.390 resv_hugepages=0 00:04:06.390 surplus_hugepages=0 00:04:06.390 04:44:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.390 04:44:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.390 anon_hugepages=0 00:04:06.390 04:44:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.390 04:44:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.390 04:44:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.390 04:44:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.390 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.390 04:44:29 -- setup/common.sh@18 -- # local node= 00:04:06.390 04:44:29 -- setup/common.sh@19 -- # local var val 00:04:06.390 04:44:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.390 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.390 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.390 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.390 04:44:29 -- setup/common.sh@28 -- # mapfile -t 
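For reference, the field-by-field scan that dominates the trace above reduces to the following pattern (a minimal standalone sketch, assuming bash with extglob; the function name and structure are illustrative, not the exact setup/common.sh source):

# get_meminfo-style scan: read a meminfo source, strip any "Node <n> " prefix,
# then walk it key by key until the requested field matches.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    # A node argument switches to the per-node copy, as common.sh@23-24 does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node 0 " prefix
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Total     # -> 1024 on the VM traced here
get_meminfo_sketch HugePages_Surp 0    # -> 0, read from node0's meminfo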
00:04:06.390 04:44:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.390 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.390 04:44:29 -- setup/common.sh@18 -- # local node=
00:04:06.390 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:06.390 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.390 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.390 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.390 04:44:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.390 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.390 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.390 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:06.390 04:44:29 -- setup/common.sh@31 -- # read -r var val _
00:04:06.390 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5055240 kB' 'MemAvailable: 9440420 kB' 'Buffers: 35452 kB' 'Cached: 4501732 kB' 'SwapCached: 0 kB' 'Active: 414092 kB' 'Inactive: 4237716 kB' 'Active(anon): 125968 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237716 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 143564 kB' 'Mapped: 57476 kB' 'Shmem: 2596 kB' 'KReclaimable: 181092 kB' 'Slab: 262136 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81044 kB' 'KernelStack: 4976 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 376376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... field-by-field '[[ <field> == HugePages_Total ]] / continue' scan elided ...]
00:04:06.391 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.391 04:44:29 -- setup/common.sh@33 -- # echo 1024
00:04:06.391 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:06.391 04:44:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.391 04:44:29 -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.392 04:44:29 -- setup/hugepages.sh@27 -- # local node
00:04:06.392 04:44:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.392 04:44:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.392 04:44:29 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:06.392 04:44:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
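The get_nodes step just traced discovers the NUMA layout purely from sysfs; roughly the following (a sketch, assuming bash with extglob/nullglob; the 2048 kB page size matches this run, other systems may expose more sizes):

# Enumerate NUMA nodes, one associative-array slot per node<N> directory.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # Record the huge page count configured on this node.
    nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2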
00:04:06.392 04:44:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.392 04:44:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.392 04:44:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.392 04:44:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.392 04:44:29 -- setup/common.sh@18 -- # local node=0
00:04:06.392 04:44:29 -- setup/common.sh@19 -- # local var val
00:04:06.392 04:44:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.392 04:44:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.392 04:44:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.392 04:44:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.392 04:44:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.392 04:44:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.392 04:44:29 -- setup/common.sh@31 -- # IFS=': '
00:04:06.392 04:44:29 -- setup/common.sh@31 -- # read -r var val _
00:04:06.392 04:44:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5055240 kB' 'MemUsed: 7191084 kB' 'SwapCached: 0 kB' 'Active: 414332 kB' 'Inactive: 4237716 kB' 'Active(anon): 126208 kB' 'Inactive(anon): 0 kB' 'Active(file): 288124 kB' 'Inactive(file): 4237716 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4537184 kB' 'Mapped: 57476 kB' 'AnonPages: 143804 kB' 'Shmem: 2596 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181092 kB' 'Slab: 262136 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 81044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... field-by-field '[[ <field> == HugePages_Surp ]] / continue' scan of the node0 meminfo elided ...]
00:04:06.393 04:44:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.393 04:44:29 -- setup/common.sh@33 -- # echo 0
00:04:06.393 04:44:29 -- setup/common.sh@33 -- # return 0
00:04:06.393 04:44:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.393 04:44:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.393 04:44:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.393 node0=1024 expecting 1024
00:04:06.393 04:44:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.393 04:44:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:06.393 04:44:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:06.393
00:04:06.393 real 0m1.331s
00:04:06.393 user 0m0.499s
00:04:06.393 sys 0m0.885s
00:04:06.393 04:44:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:06.393 04:44:29 -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 ************************************
00:04:06.393 END TEST no_shrink_alloc
00:04:06.393 ************************************
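The arithmetic the test keeps asserting can be reproduced by hand. A standalone approximation follows (hedged: it mirrors the script's (( total == nr + surp + resv )) style check, which held trivially here because surplus and reserved were both 0 in this run; general kernel accounting treats Rsvd as a subset of Total):

# Cross-check hugepage counters from /proc/meminfo and the per-node files.
nr=$(cat /proc/sys/vm/nr_hugepages)
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
per_node=0
for f in /sys/devices/system/node/node*/meminfo; do
    # Per-node lines read "Node 0 HugePages_Total: 1024"; take the last field.
    per_node=$(( per_node + $(awk '/HugePages_Total:/ {print $NF}' "$f") ))
done
(( total == nr + surp + resv )) && echo "node0=$per_node expecting $nr" \
    || echo 'hugepage accounting mismatch' >&2
(( per_node == total )) || echo 'per-node totals disagree with /proc/meminfo' >&2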
00:04:06.393 04:44:29 -- setup/hugepages.sh@217 -- # clear_hp
00:04:06.393 04:44:29 -- setup/hugepages.sh@37 -- # local node hp
00:04:06.393 04:44:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:06.393 04:44:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:06.393 04:44:29 -- setup/hugepages.sh@41 -- # echo 0
00:04:06.393 04:44:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:06.393 04:44:29 -- setup/hugepages.sh@41 -- # echo 0
00:04:06.393 04:44:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:06.393 04:44:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:06.393
00:04:06.393 real 0m5.918s
00:04:06.393 user 0m2.035s
00:04:06.393 sys 0m4.033s
00:04:06.393 04:44:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:06.393 04:44:29 -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 ************************************
00:04:06.393 END TEST hugepages
00:04:06.393 ************************************
00:04:06.393 04:44:29 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:06.393 04:44:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:06.393 04:44:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:06.393 04:44:29 -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 ************************************
00:04:06.393 START TEST driver
00:04:06.393 ************************************
00:04:06.393 04:44:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:06.652 * Looking for test storage...
00:04:06.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
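The clear_hp step above simply zeroes every per-node, per-size nr_hugepages knob before handing the machine back. An equivalent standalone loop (requires root; the sysfs paths are the standard layout seen in the trace):

# Release all hugepages: write 0 to each size's nr_hugepages on every node.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes   # flag for later stages that pages were released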
ver1_l : ver2_l) )) 00:04:06.652 04:44:30 -- scripts/common.sh@364 -- # decimal 1 00:04:06.652 04:44:30 -- scripts/common.sh@352 -- # local d=1 00:04:06.652 04:44:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.652 04:44:30 -- scripts/common.sh@354 -- # echo 1 00:04:06.652 04:44:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:06.652 04:44:30 -- scripts/common.sh@365 -- # decimal 2 00:04:06.652 04:44:30 -- scripts/common.sh@352 -- # local d=2 00:04:06.652 04:44:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.652 04:44:30 -- scripts/common.sh@354 -- # echo 2 00:04:06.652 04:44:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:06.652 04:44:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:06.652 04:44:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:06.652 04:44:30 -- scripts/common.sh@367 -- # return 0 00:04:06.652 04:44:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.652 04:44:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.652 --rc genhtml_branch_coverage=1 00:04:06.652 --rc genhtml_function_coverage=1 00:04:06.652 --rc genhtml_legend=1 00:04:06.652 --rc geninfo_all_blocks=1 00:04:06.652 --rc geninfo_unexecuted_blocks=1 00:04:06.652 00:04:06.652 ' 00:04:06.652 04:44:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.652 --rc genhtml_branch_coverage=1 00:04:06.652 --rc genhtml_function_coverage=1 00:04:06.652 --rc genhtml_legend=1 00:04:06.652 --rc geninfo_all_blocks=1 00:04:06.652 --rc geninfo_unexecuted_blocks=1 00:04:06.652 00:04:06.652 ' 00:04:06.652 04:44:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.652 --rc genhtml_branch_coverage=1 00:04:06.652 --rc genhtml_function_coverage=1 00:04:06.652 --rc genhtml_legend=1 00:04:06.652 --rc geninfo_all_blocks=1 00:04:06.652 --rc geninfo_unexecuted_blocks=1 00:04:06.652 00:04:06.652 ' 00:04:06.652 04:44:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.652 --rc genhtml_branch_coverage=1 00:04:06.652 --rc genhtml_function_coverage=1 00:04:06.652 --rc genhtml_legend=1 00:04:06.652 --rc geninfo_all_blocks=1 00:04:06.652 --rc geninfo_unexecuted_blocks=1 00:04:06.652 00:04:06.652 ' 00:04:06.652 04:44:30 -- setup/driver.sh@68 -- # setup reset 00:04:06.652 04:44:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.652 04:44:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.219 04:44:30 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:07.219 04:44:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.219 04:44:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.219 04:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:07.219 ************************************ 00:04:07.219 START TEST guess_driver 00:04:07.219 ************************************ 00:04:07.219 04:44:30 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:07.219 04:44:30 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:07.219 04:44:30 -- setup/driver.sh@47 -- # local fail=0 00:04:07.219 04:44:30 -- setup/driver.sh@49 -- # pick_driver 00:04:07.219 04:44:30 -- setup/driver.sh@36 -- # vfio 
00:04:07.219 04:44:30 -- setup/driver.sh@21 -- # local iommu_grups 00:04:07.219 04:44:30 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:07.219 04:44:30 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:07.219 04:44:30 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:07.219 04:44:30 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:07.219 04:44:30 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:07.219 04:44:30 -- setup/driver.sh@32 -- # return 1 00:04:07.219 04:44:30 -- setup/driver.sh@38 -- # uio 00:04:07.219 04:44:30 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:04:07.219 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:04:07.219 04:44:30 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:07.219 Looking for driver=uio_pci_generic 00:04:07.219 04:44:30 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:07.219 04:44:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.219 04:44:30 -- setup/driver.sh@45 -- # setup output config 00:04:07.219 04:44:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.219 04:44:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.478 04:44:30 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:07.478 04:44:30 -- setup/driver.sh@58 -- # continue 00:04:07.478 04:44:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.737 04:44:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.737 04:44:31 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:07.737 04:44:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.305 04:44:31 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:08.305 04:44:31 -- setup/driver.sh@65 -- # setup reset 00:04:08.305 04:44:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.305 04:44:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.873 00:04:08.873 real 0m1.596s 00:04:08.873 user 0m0.355s 00:04:08.873 sys 0m1.285s 00:04:08.873 ************************************ 00:04:08.873 END TEST guess_driver 00:04:08.873 ************************************ 00:04:08.873 04:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.873 04:44:32 -- common/autotest_common.sh@10 -- # set +x 00:04:08.873 00:04:08.873 real 0m2.293s 00:04:08.873 user 0m0.636s 00:04:08.873 sys 0m1.770s 00:04:08.873 04:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.873 ************************************ 00:04:08.873 END TEST driver 00:04:08.873 04:44:32 -- common/autotest_common.sh@10 -- # set +x 00:04:08.873 ************************************ 00:04:08.873 04:44:32 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.873 04:44:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.873 04:44:32 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.873 04:44:32 -- common/autotest_common.sh@10 -- # set +x 00:04:08.873 ************************************ 00:04:08.873 START TEST devices 00:04:08.873 ************************************ 00:04:08.873 04:44:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.873 * Looking for test storage... 00:04:08.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.873 04:44:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:08.873 04:44:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:08.873 04:44:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:08.873 04:44:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:08.873 04:44:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:08.873 04:44:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:08.873 04:44:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:08.873 04:44:32 -- scripts/common.sh@335 -- # IFS=.-: 00:04:08.873 04:44:32 -- scripts/common.sh@335 -- # read -ra ver1 00:04:08.873 04:44:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.873 04:44:32 -- scripts/common.sh@336 -- # read -ra ver2 00:04:08.873 04:44:32 -- scripts/common.sh@337 -- # local 'op=<' 00:04:08.873 04:44:32 -- scripts/common.sh@339 -- # ver1_l=2 00:04:08.873 04:44:32 -- scripts/common.sh@340 -- # ver2_l=1 00:04:08.873 04:44:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:08.873 04:44:32 -- scripts/common.sh@343 -- # case "$op" in 00:04:08.873 04:44:32 -- scripts/common.sh@344 -- # : 1 00:04:08.873 04:44:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:08.873 04:44:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.873 04:44:32 -- scripts/common.sh@364 -- # decimal 1 00:04:08.873 04:44:32 -- scripts/common.sh@352 -- # local d=1 00:04:08.873 04:44:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.873 04:44:32 -- scripts/common.sh@354 -- # echo 1 00:04:08.873 04:44:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:08.873 04:44:32 -- scripts/common.sh@365 -- # decimal 2 00:04:08.873 04:44:32 -- scripts/common.sh@352 -- # local d=2 00:04:08.873 04:44:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.873 04:44:32 -- scripts/common.sh@354 -- # echo 2 00:04:09.132 04:44:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:09.132 04:44:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:09.132 04:44:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:09.132 04:44:32 -- scripts/common.sh@367 -- # return 0 00:04:09.132 04:44:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.132 04:44:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.132 --rc genhtml_branch_coverage=1 00:04:09.132 --rc genhtml_function_coverage=1 00:04:09.132 --rc genhtml_legend=1 00:04:09.132 --rc geninfo_all_blocks=1 00:04:09.132 --rc geninfo_unexecuted_blocks=1 00:04:09.132 00:04:09.132 ' 00:04:09.132 04:44:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.132 --rc genhtml_branch_coverage=1 00:04:09.132 --rc genhtml_function_coverage=1 00:04:09.132 --rc genhtml_legend=1 00:04:09.132 --rc geninfo_all_blocks=1 00:04:09.132 --rc geninfo_unexecuted_blocks=1 00:04:09.132 00:04:09.132 ' 
00:04:09.132 04:44:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.132 --rc genhtml_branch_coverage=1 00:04:09.132 --rc genhtml_function_coverage=1 00:04:09.132 --rc genhtml_legend=1 00:04:09.132 --rc geninfo_all_blocks=1 00:04:09.132 --rc geninfo_unexecuted_blocks=1 00:04:09.132 00:04:09.132 ' 00:04:09.132 04:44:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.132 --rc genhtml_branch_coverage=1 00:04:09.132 --rc genhtml_function_coverage=1 00:04:09.132 --rc genhtml_legend=1 00:04:09.132 --rc geninfo_all_blocks=1 00:04:09.132 --rc geninfo_unexecuted_blocks=1 00:04:09.132 00:04:09.132 ' 00:04:09.132 04:44:32 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:09.132 04:44:32 -- setup/devices.sh@192 -- # setup reset 00:04:09.132 04:44:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.132 04:44:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.391 04:44:32 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:09.391 04:44:32 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:09.391 04:44:32 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:09.391 04:44:32 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:09.391 04:44:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.391 04:44:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:09.391 04:44:32 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:09.391 04:44:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.391 04:44:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.391 04:44:32 -- setup/devices.sh@196 -- # blocks=() 00:04:09.391 04:44:32 -- setup/devices.sh@196 -- # declare -a blocks 00:04:09.391 04:44:32 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:09.391 04:44:32 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:09.391 04:44:32 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:09.392 04:44:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.392 04:44:32 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:09.392 04:44:32 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.392 04:44:32 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:09.392 04:44:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:09.392 04:44:32 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:09.392 04:44:32 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:09.392 04:44:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:09.392 No valid GPT data, bailing 00:04:09.392 04:44:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.651 04:44:32 -- scripts/common.sh@393 -- # pt= 00:04:09.651 04:44:32 -- scripts/common.sh@394 -- # return 1 00:04:09.651 04:44:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:09.651 04:44:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:09.651 04:44:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:09.651 04:44:32 -- setup/common.sh@80 -- # echo 5368709120 00:04:09.651 04:44:32 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:09.651 04:44:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.651 04:44:32 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:09.651 04:44:32 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:09.651 04:44:32 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:09.651 04:44:32 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:09.651 04:44:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.651 04:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.651 04:44:32 -- common/autotest_common.sh@10 -- # set +x 00:04:09.651 ************************************ 00:04:09.651 START TEST nvme_mount 00:04:09.651 ************************************ 00:04:09.651 04:44:32 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:09.651 04:44:32 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:09.651 04:44:32 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:09.651 04:44:32 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.651 04:44:32 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.651 04:44:32 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:09.651 04:44:32 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.651 04:44:32 -- setup/common.sh@40 -- # local part_no=1 00:04:09.651 04:44:32 -- setup/common.sh@41 -- # local size=1073741824 00:04:09.651 04:44:32 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.651 04:44:32 -- setup/common.sh@44 -- # parts=() 00:04:09.651 04:44:32 -- setup/common.sh@44 -- # local parts 00:04:09.651 04:44:32 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.651 04:44:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.651 04:44:32 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.651 04:44:32 -- setup/common.sh@46 -- # (( part++ )) 00:04:09.651 04:44:32 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.651 04:44:32 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:09.651 04:44:32 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.651 04:44:32 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.588 Creating new GPT entries in memory. 00:04:10.588 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.588 other utilities. 00:04:10.588 04:44:33 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.588 04:44:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.588 04:44:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.588 04:44:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.588 04:44:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:11.526 Creating new GPT entries in memory. 00:04:11.526 The operation has completed successfully. 
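The nvme_mount flow being exercised here is the usual zap-partition-format-mount sequence: sgdisk wipes the GPT, a single partition is created at sector 2048, mkfs.ext4 formats it, and it is mounted for the dummy-file checks that follow. A condensed sketch of those steps, assuming a throwaway /dev/nvme0n1 and a placeholder mount point (the real script also serializes partition uevents through sync_dev_uevents.sh, omitted here):

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1        # placeholder test disk -- data will be destroyed
    mnt=/tmp/nvme_mount      # placeholder mount point
    sgdisk "$disk" --zap-all              # wipe existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191    # partition 1: 262144 sectors (~128 MiB)
    mkfs.ext4 -qF "${disk}p1"             # quiet, forced ext4 format
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"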
00:04:11.526 04:44:34 -- setup/common.sh@57 -- # (( part++ )) 00:04:11.526 04:44:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.526 04:44:34 -- setup/common.sh@62 -- # wait 55341 00:04:11.526 04:44:35 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.526 04:44:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:11.526 04:44:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.526 04:44:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:11.526 04:44:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:11.526 04:44:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.526 04:44:35 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.526 04:44:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:11.526 04:44:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:11.526 04:44:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.526 04:44:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.526 04:44:35 -- setup/devices.sh@53 -- # local found=0 00:04:11.526 04:44:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.526 04:44:35 -- setup/devices.sh@56 -- # : 00:04:11.526 04:44:35 -- setup/devices.sh@59 -- # local pci status 00:04:11.526 04:44:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.526 04:44:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:11.526 04:44:35 -- setup/devices.sh@47 -- # setup output config 00:04:11.526 04:44:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.526 04:44:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.785 04:44:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:11.785 04:44:35 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:11.785 04:44:35 -- setup/devices.sh@63 -- # found=1 00:04:11.785 04:44:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.785 04:44:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:11.785 04:44:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.044 04:44:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:12.044 04:44:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.612 04:44:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.612 04:44:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:12.612 04:44:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.612 04:44:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.612 04:44:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.612 04:44:35 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:12.612 04:44:35 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.612 04:44:35 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.612 04:44:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.612 04:44:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:12.612 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.612 04:44:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.612 04:44:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.871 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:12.871 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:12.871 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:12.871 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:12.871 04:44:36 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:12.871 04:44:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:12.871 04:44:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.871 04:44:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:12.871 04:44:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:12.871 04:44:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.871 04:44:36 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.871 04:44:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:12.871 04:44:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:12.871 04:44:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.871 04:44:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.871 04:44:36 -- setup/devices.sh@53 -- # local found=0 00:04:12.871 04:44:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.871 04:44:36 -- setup/devices.sh@56 -- # : 00:04:12.871 04:44:36 -- setup/devices.sh@59 -- # local pci status 00:04:12.871 04:44:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.871 04:44:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:12.871 04:44:36 -- setup/devices.sh@47 -- # setup output config 00:04:12.871 04:44:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.871 04:44:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.130 04:44:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.130 04:44:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.130 04:44:36 -- setup/devices.sh@63 -- # found=1 00:04:13.130 04:44:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.130 04:44:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.130 04:44:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.130 04:44:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.130 04:44:36 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.072 04:44:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.072 04:44:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:14.072 04:44:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.072 04:44:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.072 04:44:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:14.072 04:44:37 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.072 04:44:37 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:14.072 04:44:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:14.072 04:44:37 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:14.072 04:44:37 -- setup/devices.sh@50 -- # local mount_point= 00:04:14.072 04:44:37 -- setup/devices.sh@51 -- # local test_file= 00:04:14.072 04:44:37 -- setup/devices.sh@53 -- # local found=0 00:04:14.072 04:44:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.072 04:44:37 -- setup/devices.sh@59 -- # local pci status 00:04:14.072 04:44:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.072 04:44:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:14.072 04:44:37 -- setup/devices.sh@47 -- # setup output config 00:04:14.072 04:44:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.072 04:44:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.072 04:44:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.072 04:44:37 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:14.072 04:44:37 -- setup/devices.sh@63 -- # found=1 00:04:14.072 04:44:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.072 04:44:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.072 04:44:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.332 04:44:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.332 04:44:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.902 04:44:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.902 04:44:38 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:14.902 04:44:38 -- setup/devices.sh@68 -- # return 0 00:04:14.902 04:44:38 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:14.902 04:44:38 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.902 04:44:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.902 04:44:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:14.902 04:44:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:14.902 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:14.902 00:04:14.902 real 0m5.280s 00:04:14.902 user 0m0.530s 00:04:14.902 sys 0m2.550s 00:04:14.902 04:44:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.902 04:44:38 -- common/autotest_common.sh@10 -- # set +x 00:04:14.902 ************************************ 00:04:14.902 END TEST nvme_mount 00:04:14.902 ************************************ 00:04:14.902 04:44:38 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:14.902 04:44:38 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:04:14.902 04:44:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.902 04:44:38 -- common/autotest_common.sh@10 -- # set +x 00:04:14.902 ************************************ 00:04:14.902 START TEST dm_mount 00:04:14.902 ************************************ 00:04:14.902 04:44:38 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:14.902 04:44:38 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:14.902 04:44:38 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:14.902 04:44:38 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:14.902 04:44:38 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:14.902 04:44:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:14.902 04:44:38 -- setup/common.sh@40 -- # local part_no=2 00:04:14.902 04:44:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:14.902 04:44:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:14.902 04:44:38 -- setup/common.sh@44 -- # parts=() 00:04:14.902 04:44:38 -- setup/common.sh@44 -- # local parts 00:04:14.902 04:44:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:14.902 04:44:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.902 04:44:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:14.902 04:44:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:14.902 04:44:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.902 04:44:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:14.902 04:44:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:14.902 04:44:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.902 04:44:38 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:14.902 04:44:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:14.902 04:44:38 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:15.851 Creating new GPT entries in memory. 00:04:15.851 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.851 other utilities. 00:04:15.851 04:44:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.851 04:44:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.851 04:44:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.851 04:44:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.851 04:44:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:16.797 Creating new GPT entries in memory. 00:04:16.797 The operation has completed successfully. 00:04:16.797 04:44:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:16.797 04:44:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.797 04:44:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.797 04:44:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.797 04:44:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:18.174 The operation has completed successfully. 
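Once the two partitions exist, the dm_mount test builds a device-mapper target named nvme_dm_test on top of them and then resolves the kernel's dm-N node behind the mapper name, which is what the readlink and dm= steps in the trace below do. A sketch of that construction, with the caveat that the linear concatenation table is an assumption here -- the trace does not show the table the script actually feeds to dmsetup:

    #!/usr/bin/env bash
    set -e
    p1=/dev/nvme0n1p1    # placeholder partitions, 262144 sectors each
    p2=/dev/nvme0n1p2
    # Table format: <logical_start_sector> <num_sectors> linear <device> <offset>
    dmsetup create nvme_dm_test <<EOF
    0 262144 linear $p1 0
    262144 262144 linear $p2 0
    EOF
    # Resolve the friendly mapper name to the kernel device node (dm-N).
    dm=$(readlink -f /dev/mapper/nvme_dm_test)    # e.g. /dev/dm-0
    echo "nvme_dm_test is ${dm##*/}"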
00:04:18.174 04:44:41 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.174 04:44:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.174 04:44:41 -- setup/common.sh@62 -- # wait 55766 00:04:18.174 04:44:41 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:18.174 04:44:41 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.174 04:44:41 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.174 04:44:41 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:18.174 04:44:41 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:18.174 04:44:41 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.174 04:44:41 -- setup/devices.sh@161 -- # break 00:04:18.174 04:44:41 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.174 04:44:41 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:18.174 04:44:41 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:18.174 04:44:41 -- setup/devices.sh@166 -- # dm=dm-0 00:04:18.174 04:44:41 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:18.174 04:44:41 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:18.174 04:44:41 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.174 04:44:41 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:18.174 04:44:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.174 04:44:41 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.174 04:44:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:18.174 04:44:41 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.174 04:44:41 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.174 04:44:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:18.174 04:44:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:18.174 04:44:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.174 04:44:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.174 04:44:41 -- setup/devices.sh@53 -- # local found=0 00:04:18.174 04:44:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.174 04:44:41 -- setup/devices.sh@56 -- # : 00:04:18.174 04:44:41 -- setup/devices.sh@59 -- # local pci status 00:04:18.174 04:44:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.174 04:44:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:18.174 04:44:41 -- setup/devices.sh@47 -- # setup output config 00:04:18.174 04:44:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.174 04:44:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.174 04:44:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.174 04:44:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:18.174 04:44:41 -- setup/devices.sh@63 -- # found=1 00:04:18.174 04:44:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.174 04:44:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.174 04:44:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.439 04:44:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.439 04:44:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.028 04:44:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.028 04:44:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:19.028 04:44:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.028 04:44:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.028 04:44:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.028 04:44:42 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.028 04:44:42 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:19.028 04:44:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.028 04:44:42 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:19.028 04:44:42 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.028 04:44:42 -- setup/devices.sh@51 -- # local test_file= 00:04:19.028 04:44:42 -- setup/devices.sh@53 -- # local found=0 00:04:19.028 04:44:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.028 04:44:42 -- setup/devices.sh@59 -- # local pci status 00:04:19.028 04:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.028 04:44:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.028 04:44:42 -- setup/devices.sh@47 -- # setup output config 00:04:19.028 04:44:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.028 04:44:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.028 04:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.028 04:44:42 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:19.028 04:44:42 -- setup/devices.sh@63 -- # found=1 00:04:19.028 04:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.028 04:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.028 04:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.288 04:44:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.288 04:44:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.856 04:44:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.856 04:44:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:19.856 04:44:43 -- setup/devices.sh@68 -- # return 0 00:04:19.856 04:44:43 -- setup/devices.sh@187 -- # cleanup_dm 00:04:19.856 04:44:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.856 04:44:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:19.856 04:44:43 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:19.856 04:44:43 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.856 04:44:43 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:19.856 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.856 04:44:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:19.856 04:44:43 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:19.856 00:04:19.856 real 0m4.991s 00:04:19.856 user 0m0.308s 00:04:19.856 sys 0m1.663s 00:04:19.856 04:44:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.856 04:44:43 -- common/autotest_common.sh@10 -- # set +x 00:04:19.856 ************************************ 00:04:19.856 END TEST dm_mount 00:04:19.856 ************************************ 00:04:19.856 04:44:43 -- setup/devices.sh@1 -- # cleanup 00:04:19.856 04:44:43 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:19.856 04:44:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.856 04:44:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.856 04:44:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:19.856 04:44:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.856 04:44:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.115 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.115 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.115 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.115 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.116 04:44:43 -- setup/devices.sh@12 -- # cleanup_dm 00:04:20.116 04:44:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:20.116 04:44:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.116 04:44:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.116 04:44:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:20.116 04:44:43 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.116 04:44:43 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:20.116 ************************************ 00:04:20.116 END TEST devices 00:04:20.116 ************************************ 00:04:20.116 00:04:20.116 real 0m11.359s 00:04:20.116 user 0m1.227s 00:04:20.116 sys 0m4.683s 00:04:20.116 04:44:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.116 04:44:43 -- common/autotest_common.sh@10 -- # set +x 00:04:20.116 00:04:20.116 real 0m24.028s 00:04:20.116 user 0m5.277s 00:04:20.116 sys 0m13.741s 00:04:20.116 04:44:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.116 04:44:43 -- common/autotest_common.sh@10 -- # set +x 00:04:20.116 ************************************ 00:04:20.116 END TEST setup.sh 00:04:20.116 ************************************ 00:04:20.375 04:44:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:20.375 Hugepages 00:04:20.375 node hugesize free / total 00:04:20.375 node0 1048576kB 0 / 0 00:04:20.375 node0 2048kB 2048 / 2048 00:04:20.375 00:04:20.375 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.634 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:20.634 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:20.634 04:44:43 -- spdk/autotest.sh@128 -- # uname -s 00:04:20.634 04:44:44 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:20.634 04:44:44 -- spdk/autotest.sh@130 -- # 
nvme_namespace_revert 00:04:20.634 04:44:44 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:21.152 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.720 04:44:45 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:22.657 04:44:46 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:22.657 04:44:46 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:22.657 04:44:46 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:22.657 04:44:46 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:22.657 04:44:46 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:22.657 04:44:46 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:22.657 04:44:46 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.657 04:44:46 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.657 04:44:46 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:22.657 04:44:46 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:22.657 04:44:46 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:04:22.657 04:44:46 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:22.915 Waiting for block devices as requested 00:04:23.174 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.174 04:44:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:23.174 04:44:46 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:23.174 04:44:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:23.174 04:44:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:23.174 04:44:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:23.174 04:44:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.174 04:44:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:23.174 04:44:46 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:23.174 04:44:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:23.174 04:44:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:23.174 04:44:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:23.174 04:44:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:23.174 04:44:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:23.174 04:44:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:23.174 04:44:46 -- common/autotest_common.sh@1552 -- # 
continue 00:04:23.174 04:44:46 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:23.174 04:44:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.174 04:44:46 -- common/autotest_common.sh@10 -- # set +x 00:04:23.174 04:44:46 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:23.174 04:44:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.174 04:44:46 -- common/autotest_common.sh@10 -- # set +x 00:04:23.174 04:44:46 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:23.742 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.310 04:44:47 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:24.310 04:44:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.310 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:04:24.310 04:44:47 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:24.311 04:44:47 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:24.311 04:44:47 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.311 04:44:47 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:24.311 04:44:47 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:24.311 04:44:47 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:24.311 04:44:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:24.311 04:44:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:24.311 04:44:47 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.311 04:44:47 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.311 04:44:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:24.311 04:44:47 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:24.311 04:44:47 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:04:24.311 04:44:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:24.311 04:44:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:24.311 04:44:47 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:24.311 04:44:47 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.311 04:44:47 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:24.311 04:44:47 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:24.311 04:44:47 -- common/autotest_common.sh@1588 -- # return 0 00:04:24.311 04:44:47 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:04:24.311 04:44:47 -- spdk/autotest.sh@149 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:24.311 04:44:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.311 04:44:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.311 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:04:24.311 ************************************ 00:04:24.311 START TEST unittest 00:04:24.311 ************************************ 00:04:24.311 04:44:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:24.311 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:24.311 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:24.311 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:24.311 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 
00:04:24.311 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:24.311 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:24.311 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:24.311 ++ rpc_py=rpc_cmd 00:04:24.311 ++ set -e 00:04:24.311 ++ shopt -s nullglob 00:04:24.311 ++ shopt -s extglob 00:04:24.311 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:24.311 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:24.311 +++ CONFIG_WPDK_DIR= 00:04:24.311 +++ CONFIG_ASAN=y 00:04:24.311 +++ CONFIG_VBDEV_COMPRESS=n 00:04:24.311 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:24.311 +++ CONFIG_USDT=n 00:04:24.311 +++ CONFIG_CUSTOMOCF=n 00:04:24.311 +++ CONFIG_PREFIX=/usr/local 00:04:24.311 +++ CONFIG_RBD=n 00:04:24.311 +++ CONFIG_LIBDIR= 00:04:24.311 +++ CONFIG_IDXD=y 00:04:24.311 +++ CONFIG_NVME_CUSE=y 00:04:24.311 +++ CONFIG_SMA=n 00:04:24.311 +++ CONFIG_VTUNE=n 00:04:24.311 +++ CONFIG_TSAN=n 00:04:24.311 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:24.311 +++ CONFIG_VFIO_USER_DIR= 00:04:24.311 +++ CONFIG_PGO_CAPTURE=n 00:04:24.311 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:24.311 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:24.311 +++ CONFIG_LTO=n 00:04:24.311 +++ CONFIG_ISCSI_INITIATOR=y 00:04:24.311 +++ CONFIG_CET=n 00:04:24.311 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:24.311 +++ CONFIG_OCF_PATH= 00:04:24.311 +++ CONFIG_RDMA_SET_TOS=y 00:04:24.311 +++ CONFIG_HAVE_ARC4RANDOM=y 00:04:24.311 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:24.311 +++ CONFIG_UBLK=y 00:04:24.311 +++ CONFIG_ISAL_CRYPTO=y 00:04:24.311 +++ CONFIG_OPENSSL_PATH= 00:04:24.311 +++ CONFIG_OCF=n 00:04:24.311 +++ CONFIG_FUSE=n 00:04:24.311 +++ CONFIG_VTUNE_DIR= 00:04:24.311 +++ CONFIG_FUZZER_LIB= 00:04:24.311 +++ CONFIG_FUZZER=n 00:04:24.311 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:24.311 +++ CONFIG_CRYPTO=n 00:04:24.311 +++ CONFIG_PGO_USE=n 00:04:24.311 +++ CONFIG_VHOST=y 00:04:24.311 +++ CONFIG_DAOS=n 00:04:24.311 +++ CONFIG_DPDK_INC_DIR= 00:04:24.311 +++ CONFIG_DAOS_DIR= 00:04:24.311 +++ CONFIG_UNIT_TESTS=y 00:04:24.311 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:24.311 +++ CONFIG_VIRTIO=y 00:04:24.311 +++ CONFIG_COVERAGE=y 00:04:24.311 +++ CONFIG_RDMA=y 00:04:24.311 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:24.311 +++ CONFIG_URING_PATH= 00:04:24.311 +++ CONFIG_XNVME=n 00:04:24.311 +++ CONFIG_VFIO_USER=n 00:04:24.311 +++ CONFIG_ARCH=native 00:04:24.311 +++ CONFIG_URING_ZNS=n 00:04:24.311 +++ CONFIG_WERROR=y 00:04:24.311 +++ CONFIG_HAVE_LIBBSD=n 00:04:24.311 +++ CONFIG_UBSAN=y 00:04:24.311 +++ CONFIG_IPSEC_MB_DIR= 00:04:24.311 +++ CONFIG_GOLANG=n 00:04:24.311 +++ CONFIG_ISAL=y 00:04:24.311 +++ CONFIG_IDXD_KERNEL=y 00:04:24.311 +++ CONFIG_DPDK_LIB_DIR= 00:04:24.311 +++ CONFIG_RDMA_PROV=verbs 00:04:24.311 +++ CONFIG_APPS=y 00:04:24.311 +++ CONFIG_SHARED=n 00:04:24.311 +++ CONFIG_FC_PATH= 00:04:24.311 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:24.311 +++ CONFIG_FC=n 00:04:24.311 +++ CONFIG_AVAHI=n 00:04:24.311 +++ CONFIG_FIO_PLUGIN=y 00:04:24.311 +++ CONFIG_RAID5F=y 00:04:24.311 +++ CONFIG_EXAMPLES=y 00:04:24.311 +++ CONFIG_TESTS=y 00:04:24.311 +++ CONFIG_CRYPTO_MLX5=n 00:04:24.311 +++ CONFIG_MAX_LCORES= 00:04:24.311 +++ CONFIG_IPSEC_MB=n 00:04:24.311 +++ CONFIG_DEBUG=y 00:04:24.311 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:24.311 +++ CONFIG_CROSS_PREFIX= 00:04:24.311 +++ CONFIG_URING=n 00:04:24.311 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:24.311 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
00:04:24.311 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:04:24.311 +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:04:24.311 +++ _root=/home/vagrant/spdk_repo/spdk
00:04:24.311 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:04:24.311 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:04:24.311 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:04:24.311 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:04:24.311 +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:04:24.311 +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:04:24.311 +++ VHOST_APP=("$_app_dir/vhost")
00:04:24.311 +++ DD_APP=("$_app_dir/spdk_dd")
00:04:24.311 +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:04:24.311 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:04:24.311 +++ [[ #ifndef SPDK_CONFIG_H
00:04:24.311 #define SPDK_CONFIG_H
00:04:24.311 #define SPDK_CONFIG_APPS 1
00:04:24.311 #define SPDK_CONFIG_ARCH native
00:04:24.311 #define SPDK_CONFIG_ASAN 1
00:04:24.311 #undef SPDK_CONFIG_AVAHI
00:04:24.311 #undef SPDK_CONFIG_CET
00:04:24.311 #define SPDK_CONFIG_COVERAGE 1
00:04:24.311 #define SPDK_CONFIG_CROSS_PREFIX
00:04:24.311 #undef SPDK_CONFIG_CRYPTO
00:04:24.311 #undef SPDK_CONFIG_CRYPTO_MLX5
00:04:24.311 #undef SPDK_CONFIG_CUSTOMOCF
00:04:24.311 #undef SPDK_CONFIG_DAOS
00:04:24.311 #define SPDK_CONFIG_DAOS_DIR
00:04:24.311 #define SPDK_CONFIG_DEBUG 1
00:04:24.311 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:04:24.311 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:24.311 #define SPDK_CONFIG_DPDK_INC_DIR
00:04:24.311 #define SPDK_CONFIG_DPDK_LIB_DIR
00:04:24.311 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:04:24.311 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:24.311 #define SPDK_CONFIG_EXAMPLES 1
00:04:24.311 #undef SPDK_CONFIG_FC
00:04:24.311 #define SPDK_CONFIG_FC_PATH
00:04:24.311 #define SPDK_CONFIG_FIO_PLUGIN 1
00:04:24.311 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:04:24.311 #undef SPDK_CONFIG_FUSE
00:04:24.311 #undef SPDK_CONFIG_FUZZER
00:04:24.311 #define SPDK_CONFIG_FUZZER_LIB
00:04:24.311 #undef SPDK_CONFIG_GOLANG
00:04:24.311 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:04:24.311 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:04:24.311 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:04:24.311 #undef SPDK_CONFIG_HAVE_LIBBSD
00:04:24.311 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:04:24.311 #define SPDK_CONFIG_IDXD 1
00:04:24.311 #define SPDK_CONFIG_IDXD_KERNEL 1
00:04:24.311 #undef SPDK_CONFIG_IPSEC_MB
00:04:24.311 #define SPDK_CONFIG_IPSEC_MB_DIR
00:04:24.311 #define SPDK_CONFIG_ISAL 1
00:04:24.311 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:04:24.311 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:04:24.311 #define SPDK_CONFIG_LIBDIR
00:04:24.311 #undef SPDK_CONFIG_LTO
00:04:24.311 #define SPDK_CONFIG_MAX_LCORES
00:04:24.311 #define SPDK_CONFIG_NVME_CUSE 1
00:04:24.311 #undef SPDK_CONFIG_OCF
00:04:24.311 #define SPDK_CONFIG_OCF_PATH
00:04:24.311 #define SPDK_CONFIG_OPENSSL_PATH
00:04:24.311 #undef SPDK_CONFIG_PGO_CAPTURE
00:04:24.311 #undef SPDK_CONFIG_PGO_USE
00:04:24.311 #define SPDK_CONFIG_PREFIX /usr/local
00:04:24.311 #define SPDK_CONFIG_RAID5F 1
00:04:24.311 #undef SPDK_CONFIG_RBD
00:04:24.311 #define SPDK_CONFIG_RDMA 1
00:04:24.311 #define SPDK_CONFIG_RDMA_PROV verbs
00:04:24.311 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:04:24.311 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:04:24.311 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:04:24.311 #undef SPDK_CONFIG_SHARED
00:04:24.311 #undef SPDK_CONFIG_SMA
00:04:24.311 #define SPDK_CONFIG_TESTS 1
00:04:24.311 #undef SPDK_CONFIG_TSAN
00:04:24.311 #define SPDK_CONFIG_UBLK 1
00:04:24.311 #define SPDK_CONFIG_UBSAN 1
00:04:24.311 #define SPDK_CONFIG_UNIT_TESTS 1
00:04:24.311 #undef SPDK_CONFIG_URING
00:04:24.311 #define SPDK_CONFIG_URING_PATH
00:04:24.311 #undef SPDK_CONFIG_URING_ZNS
00:04:24.311 #undef SPDK_CONFIG_USDT
00:04:24.311 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:04:24.311 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:04:24.311 #undef SPDK_CONFIG_VFIO_USER
00:04:24.312 #define SPDK_CONFIG_VFIO_USER_DIR
00:04:24.312 #define SPDK_CONFIG_VHOST 1
00:04:24.312 #define SPDK_CONFIG_VIRTIO 1
00:04:24.312 #undef SPDK_CONFIG_VTUNE
00:04:24.312 #define SPDK_CONFIG_VTUNE_DIR
00:04:24.312 #define SPDK_CONFIG_WERROR 1
00:04:24.312 #define SPDK_CONFIG_WPDK_DIR
00:04:24.312 #undef SPDK_CONFIG_XNVME
00:04:24.312 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:04:24.312 +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:04:24.312 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:24.312 +++ [[ -e /bin/wpdk_common.sh ]]
00:04:24.312 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:24.312 +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:24.312 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:24.312 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:24.312 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:24.312 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:24.312 ++++ export PATH
00:04:24.312 ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:24.312 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:04:24.312 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:04:24.312 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:04:24.312 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:04:24.312 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:04:24.312 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:04:24.312 +++ TEST_TAG=N/A
00:04:24.312 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:04:24.312 ++ : 1
00:04:24.312 ++ export RUN_NIGHTLY
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_AUTOTEST_DEBUG_APPS
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_RUN_VALGRIND
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_RUN_FUNCTIONAL_TEST
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_TEST_UNITTEST
00:04:24.312 ++ :
00:04:24.312 ++ export SPDK_TEST_AUTOBUILD
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_RELEASE_BUILD
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_ISAL
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_ISCSI
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_ISCSI_INITIATOR
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_TEST_NVME
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVME_PMR
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVME_BP
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVME_CLI
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVME_CUSE
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVME_FDP
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVMF
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_VFIOUSER
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_VFIOUSER_QEMU
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_FUZZER
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_FUZZER_SHORT
00:04:24.312 ++ : rdma
00:04:24.312 ++ export SPDK_TEST_NVMF_TRANSPORT
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_RBD
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_VHOST
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_TEST_BLOCKDEV
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_IOAT
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_BLOBFS
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_VHOST_INIT
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_LVOL
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_VBDEV_COMPRESS
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_RUN_ASAN
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_RUN_UBSAN
00:04:24.312 ++ :
00:04:24.312 ++ export SPDK_RUN_EXTERNAL_DPDK
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_RUN_NON_ROOT
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_CRYPTO
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_FTL
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_OCF
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_VMD
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_OPAL
00:04:24.312 ++ :
00:04:24.312 ++ export SPDK_TEST_NATIVE_DPDK
00:04:24.312 ++ : true
00:04:24.312 ++ export SPDK_AUTOTEST_X
00:04:24.312 ++ : 1
00:04:24.312 ++ export SPDK_TEST_RAID5
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_URING
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_USDT
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_USE_IGB_UIO
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_SCHEDULER
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_SCANBUILD
00:04:24.312 ++ :
00:04:24.312 ++ export SPDK_TEST_NVMF_NICS
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_SMA
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_DAOS
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_XNVME
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_ACCEL_DSA
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_ACCEL_IAA
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_ACCEL_IOAT
00:04:24.312 ++ :
00:04:24.312 ++ export SPDK_TEST_FUZZER_TARGET
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_TEST_NVMF_MDNS
00:04:24.312 ++ : 0
00:04:24.312 ++ export SPDK_JSONRPC_GO_CLIENT
00:04:24.312 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:04:24.312 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:04:24.312 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:04:24.312 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:04:24.312 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:24.312 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:24.312 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:24.312 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:24.312 ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:04:24.312 ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:04:24.312 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:04:24.312 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:04:24.312 ++ export PYTHONDONTWRITEBYTECODE=1
00:04:24.312 ++ PYTHONDONTWRITEBYTECODE=1
00:04:24.312 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:04:24.312 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:04:24.312 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:04:24.312 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:04:24.312 ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:04:24.312 ++ rm -rf /var/tmp/asan_suppression_file
00:04:24.312 ++ cat
00:04:24.312 ++ echo leak:libfuse3.so
00:04:24.312 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:04:24.312 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:04:24.312 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:04:24.312 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:04:24.312 ++ '[' -z /var/spdk/dependencies ']'
00:04:24.312 ++ export DEPENDENCY_DIR
00:04:24.312 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:04:24.312 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:04:24.312 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:04:24.312 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:04:24.312 ++ export QEMU_BIN=
00:04:24.312 ++ QEMU_BIN=
00:04:24.312 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:04:24.312 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:04:24.312 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:04:24.312 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:04:24.312 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:24.312 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:24.312 ++ _LCOV_MAIN=0
00:04:24.312 ++ _LCOV_LLVM=1
00:04:24.312 ++ _LCOV=
00:04:24.312 ++ [[ '' == *clang* ]]
00:04:24.312 ++ [[ 0 -eq 1 ]]
00:04:24.312 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:04:24.312 ++ _lcov_opt[_LCOV_MAIN]=
00:04:24.312 ++ lcov_opt=
00:04:24.312 ++ '[' 0 -eq 0 ']'
00:04:24.312 ++ export valgrind=
00:04:24.312 ++ valgrind=
00:04:24.312 +++ uname -s
00:04:24.312 ++ '[' Linux = Linux ']'
00:04:24.312 ++ HUGEMEM=4096
00:04:24.312 ++ export CLEAR_HUGE=yes
00:04:24.313 ++ CLEAR_HUGE=yes
00:04:24.313 ++ [[ 0 -eq 1 ]]
00:04:24.313 ++ [[ 0 -eq 1 ]]
00:04:24.313 ++ MAKE=make
00:04:24.313 +++ nproc
00:04:24.313 ++ MAKEFLAGS=-j10
00:04:24.313 ++ export HUGEMEM=4096
00:04:24.313 ++ HUGEMEM=4096
00:04:24.313 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:04:24.313 ++ NO_HUGE=()
00:04:24.313 ++ TEST_MODE=
00:04:24.313 ++ [[ -z '' ]]
00:04:24.313 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:04:24.313 ++ exec
00:04:24.313 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:04:24.313 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:04:24.313 ++ set_test_storage 2147483648
00:04:24.313 ++ [[ -v testdir ]]
00:04:24.313 ++ local requested_size=2147483648
00:04:24.313 ++ local mount target_dir
00:04:24.313 ++ local -A mounts fss sizes avails uses
00:04:24.313 ++ local source fs size avail mount use
00:04:24.313 ++ local storage_fallback storage_candidates
00:04:24.313 +++ mktemp -udt spdk.XXXXXX
00:04:24.313 ++ storage_fallback=/tmp/spdk.rDkmzT
00:04:24.313 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:04:24.313 ++ [[ -n '' ]]
00:04:24.313 ++ [[ -n '' ]]
00:04:24.313 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.rDkmzT/tests/unit /tmp/spdk.rDkmzT
00:04:24.313 ++ requested_size=2214592512
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 +++ df -T
00:04:24.313 +++ grep -v Filesystem
00:04:24.313 ++ mounts["$mount"]=tmpfs
00:04:24.313 ++ fss["$mount"]=tmpfs
00:04:24.313 ++ avails["$mount"]=1252954112
00:04:24.313 ++ sizes["$mount"]=1254027264
00:04:24.313 ++ uses["$mount"]=1073152
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=/dev/vda1
00:04:24.313 ++ fss["$mount"]=ext4
00:04:24.313 ++ avails["$mount"]=10282184704
00:04:24.313 ++ sizes["$mount"]=19681529856
00:04:24.313 ++ uses["$mount"]=9382567936
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=tmpfs
00:04:24.313 ++ fss["$mount"]=tmpfs
00:04:24.313 ++ avails["$mount"]=6270115840
00:04:24.313 ++ sizes["$mount"]=6270115840
00:04:24.313 ++ uses["$mount"]=0
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=tmpfs
00:04:24.313 ++ fss["$mount"]=tmpfs
00:04:24.313 ++ avails["$mount"]=5242880
00:04:24.313 ++ sizes["$mount"]=5242880
00:04:24.313 ++ uses["$mount"]=0
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=/dev/vda16
00:04:24.313 ++ fss["$mount"]=ext4
00:04:24.313 ++ avails["$mount"]=777306112
00:04:24.313 ++ sizes["$mount"]=923156480
00:04:24.313 ++ uses["$mount"]=81207296
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=/dev/vda15
00:04:24.313 ++ fss["$mount"]=vfat
00:04:24.313 ++ avails["$mount"]=103000064
00:04:24.313 ++ sizes["$mount"]=109395968
00:04:24.313 ++ uses["$mount"]=6395904
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=tmpfs
00:04:24.313 ++ fss["$mount"]=tmpfs
00:04:24.313 ++ avails["$mount"]=1254010880
00:04:24.313 ++ sizes["$mount"]=1254023168
00:04:24.313 ++ uses["$mount"]=12288
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:04:24.313 ++ fss["$mount"]=fuse.sshfs
00:04:24.313 ++ avails["$mount"]=98011815936
00:04:24.313 ++ sizes["$mount"]=105088212992
00:04:24.313 ++ uses["$mount"]=1690963968
00:04:24.313 ++ read -r source fs size use avail _ mount
00:04:24.313 ++ printf '* Looking for test storage...\n'
00:04:24.313 * Looking for test storage...
00:04:24.313 ++ local target_space new_size
00:04:24.313 ++ for target_dir in "${storage_candidates[@]}"
00:04:24.313 +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:04:24.313 +++ awk '$1 !~ /Filesystem/{print $6}'
00:04:24.572 ++ mount=/
00:04:24.572 ++ target_space=10282184704
00:04:24.572 ++ (( target_space == 0 || target_space < requested_size ))
00:04:24.572 ++ (( target_space >= requested_size ))
00:04:24.572 ++ [[ ext4 == tmpfs ]]
00:04:24.572 ++ [[ ext4 == ramfs ]]
00:04:24.572 ++ [[ / == / ]]
00:04:24.572 ++ new_size=11597160448
00:04:24.572 ++ (( new_size * 100 / sizes[/] > 95 ))
00:04:24.572 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:04:24.572 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:04:24.572 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:04:24.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:04:24.572 ++ return 0
00:04:24.572 ++ set -o errtrace
00:04:24.572 ++ shopt -s extdebug
00:04:24.572 ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:04:24.572 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:04:24.572 04:44:47 -- common/autotest_common.sh@1682 -- # true
00:04:24.572 04:44:47 -- common/autotest_common.sh@1684 -- # xtrace_fd
00:04:24.572 04:44:47 -- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:04:24.572 04:44:47 -- common/autotest_common.sh@29 -- # exec
00:04:24.572 04:44:47 -- common/autotest_common.sh@31 -- # xtrace_restore
00:04:24.572 04:44:47 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:04:24.572 04:44:47 -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:04:24.572 04:44:47 -- common/autotest_common.sh@18 -- # set -x
00:04:24.572 04:44:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:24.572 04:44:47 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:24.572 04:44:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:24.572 04:44:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:24.572 04:44:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:24.572 04:44:47 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:24.572 04:44:47 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:24.572 04:44:47 -- scripts/common.sh@335 -- # IFS=.-:
00:04:24.572 04:44:47 -- scripts/common.sh@335 -- # read -ra ver1
00:04:24.572 04:44:47 -- scripts/common.sh@336 -- # IFS=.-:
00:04:24.572 04:44:47 -- scripts/common.sh@336 -- # read -ra ver2
00:04:24.572 04:44:47 -- scripts/common.sh@337 -- # local 'op=<'
00:04:24.572 04:44:47 -- scripts/common.sh@339 -- # ver1_l=2
00:04:24.572 04:44:47 -- scripts/common.sh@340 -- # ver2_l=1
00:04:24.572 04:44:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:24.572 04:44:47 -- scripts/common.sh@343 -- # case "$op" in
00:04:24.572 04:44:47 -- scripts/common.sh@344 -- # : 1
00:04:24.572 04:44:47 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:24.572 04:44:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:24.572 04:44:47 -- scripts/common.sh@364 -- # decimal 1
00:04:24.572 04:44:47 -- scripts/common.sh@352 -- # local d=1
00:04:24.572 04:44:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:24.572 04:44:47 -- scripts/common.sh@354 -- # echo 1
00:04:24.572 04:44:47 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:24.572 04:44:47 -- scripts/common.sh@365 -- # decimal 2
00:04:24.572 04:44:47 -- scripts/common.sh@352 -- # local d=2
00:04:24.572 04:44:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:24.572 04:44:47 -- scripts/common.sh@354 -- # echo 2
00:04:24.572 04:44:47 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:24.572 04:44:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:24.572 04:44:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:24.572 04:44:47 -- scripts/common.sh@367 -- # return 0
00:04:24.572 04:44:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:24.572 04:44:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.572 --rc genhtml_branch_coverage=1
00:04:24.572 --rc genhtml_function_coverage=1
00:04:24.572 --rc genhtml_legend=1
00:04:24.572 --rc geninfo_all_blocks=1
00:04:24.572 --rc geninfo_unexecuted_blocks=1
00:04:24.572
00:04:24.572 '
00:04:24.572 04:44:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.572 --rc genhtml_branch_coverage=1
00:04:24.572 --rc genhtml_function_coverage=1
00:04:24.572 --rc genhtml_legend=1
00:04:24.572 --rc geninfo_all_blocks=1
00:04:24.572 --rc geninfo_unexecuted_blocks=1
00:04:24.572
00:04:24.572 '
00:04:24.572 04:44:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.572 --rc genhtml_branch_coverage=1
00:04:24.572 --rc genhtml_function_coverage=1
00:04:24.572 --rc genhtml_legend=1
00:04:24.572 --rc geninfo_all_blocks=1
00:04:24.572 --rc geninfo_unexecuted_blocks=1
00:04:24.572
00:04:24.572 '
00:04:24.572 04:44:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:24.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.572 --rc genhtml_branch_coverage=1
00:04:24.572 --rc genhtml_function_coverage=1
00:04:24.572 --rc genhtml_legend=1
00:04:24.572 --rc geninfo_all_blocks=1
00:04:24.572 --rc geninfo_unexecuted_blocks=1
00:04:24.572
00:04:24.572 '
00:04:24.572 04:44:47 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:04:24.572 04:44:47 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']'
00:04:24.572 04:44:47 -- unit/unittest.sh@158 -- # '[' -z x ']'
00:04:24.572 04:44:47 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']'
00:04:24.572 04:44:47 -- unit/unittest.sh@174 -- # [[ y == y ]]
00:04:24.572 04:44:47 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:04:24.572 04:44:47 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:04:24.572 04:44:47 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:04:39.454 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:04:39.454 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:04:39.454 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:04:39.454 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:04:39.454 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:04:39.454 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:05:18.166 04:45:40 -- unit/unittest.sh@182 -- # uname -m
00:05:18.167 04:45:40 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']'
00:05:18.167 04:45:40 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:05:18.167 04:45:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:18.167 04:45:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:18.167 04:45:40 -- common/autotest_common.sh@10 -- # set +x
00:05:18.167 ************************************
00:05:18.167 START TEST unittest_pci_event
00:05:18.167 ************************************
00:05:18.167 04:45:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:05:18.167
00:05:18.167
00:05:18.167 CUnit - A unit testing framework for C - Version 2.1-3
00:05:18.167 http://cunit.sourceforge.net/
00:05:18.167
00:05:18.167
00:05:18.167 Suite: pci_event
00:05:18.167 Test: test_pci_parse_event ...[2024-11-18 04:45:40.939787] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000
00:05:18.167 [2024-11-18 04:45:40.940393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000
00:05:18.167 passed
00:05:18.167
00:05:18.167
00:05:18.167 Run Summary: Type Total Ran Passed Failed Inactive
00:05:18.167 suites 1 1 n/a 0 0
00:05:18.167 tests 1 1 1 0 0
00:05:18.167 asserts 15 15 15 0 n/a
00:05:18.167
00:05:18.167 Elapsed time = 0.001 seconds
00:05:18.167 ************************************
00:05:18.167 END TEST unittest_pci_event
00:05:18.167 ************************************
00:05:18.167
00:05:18.167 real 0m0.039s
00:05:18.167 user 0m0.016s
00:05:18.167 sys 0m0.016s
00:05:18.167 04:45:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:18.167 04:45:40 -- common/autotest_common.sh@10 -- # set +x
00:05:18.167 04:45:40 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:05:18.167 04:45:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:18.167 04:45:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:18.167 04:45:40 -- common/autotest_common.sh@10 -- # set +x
00:05:18.167 ************************************
00:05:18.167 START TEST unittest_include
00:05:18.167 ************************************
00:05:18.167 04:45:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:05:18.167
00:05:18.167
00:05:18.167 CUnit - A unit testing framework for C - Version 2.1-3
00:05:18.167 http://cunit.sourceforge.net/
00:05:18.167
00:05:18.167
00:05:18.167 Suite: histogram
00:05:18.167 Test: histogram_test ...passed
00:05:18.167 Test: histogram_merge ...passed
00:05:18.167
00:05:18.167 Run Summary: Type Total Ran Passed Failed Inactive
00:05:18.167 suites 1 1 n/a 0 0
00:05:18.167 tests 2 2 2 0 0
00:05:18.167 asserts 50 50 50 0 n/a
00:05:18.167
00:05:18.167 Elapsed time = 0.006 seconds
00:05:18.167 ************************************
00:05:18.167 END TEST unittest_include
00:05:18.167 ************************************
00:05:18.167
00:05:18.167 real 0m0.036s
00:05:18.167 user 0m0.023s
00:05:18.167 sys 0m0.012s
00:05:18.167 04:45:41 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:18.167 04:45:41 -- common/autotest_common.sh@10 -- # set +x
00:05:18.167 04:45:41 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev
00:05:18.167 04:45:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:18.167 04:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:18.167 04:45:41 -- common/autotest_common.sh@10 -- # set +x
00:05:18.167 ************************************
00:05:18.167 START TEST unittest_bdev
00:05:18.167 ************************************
00:05:18.167 04:45:41 -- common/autotest_common.sh@1114 -- # unittest_bdev
00:05:18.167 04:45:41 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:05:18.167
00:05:18.167
00:05:18.167 CUnit - A unit testing framework for C - Version 2.1-3
00:05:18.167 http://cunit.sourceforge.net/
00:05:18.167
00:05:18.167
00:05:18.167 Suite: bdev
00:05:18.167 Test: bytes_to_blocks_test ...passed
00:05:18.167 Test: num_blocks_test ...passed
00:05:18.167 Test: io_valid_test ...passed
00:05:18.167 Test: open_write_test ...[2024-11-18 04:45:41.141877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:05:18.167 [2024-11-18 04:45:41.142176] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:05:18.167 [2024-11-18 04:45:41.142353] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:05:18.167 passed
00:05:18.167 Test: claim_test ...passed
00:05:18.167 Test: alias_add_del_test ...[2024-11-18 04:45:41.202294] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:05:18.167 [2024-11-18 04:45:41.202371] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:05:18.167 [2024-11-18 04:45:41.202442] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:05:18.167 passed
00:05:18.167 Test: get_device_stat_test ...passed
00:05:18.167 Test: bdev_io_types_test ...passed
00:05:18.167 Test: bdev_io_wait_test ...passed
00:05:18.167 Test: bdev_io_spans_split_test ...passed
00:05:18.167 Test: bdev_io_boundary_split_test ...passed
00:05:18.167 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-18 04:45:41.315185] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:05:18.167 passed
00:05:18.167 Test: bdev_io_mix_split_test ...passed
00:05:18.167 Test: bdev_io_split_with_io_wait ...passed
00:05:18.167 Test: bdev_io_write_unit_split_test ...[2024-11-18 04:45:41.384525] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:05:18.167 [2024-11-18 04:45:41.384626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:05:18.167 [2024-11-18 04:45:41.384679] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:05:18.167 [2024-11-18 04:45:41.384745] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:05:18.167 passed
00:05:18.167 Test: bdev_io_alignment_with_boundary ...passed
00:05:18.167 Test: bdev_io_alignment ...passed
00:05:18.167 Test: bdev_histograms ...passed
00:05:18.167 Test: bdev_write_zeroes ...passed
00:05:18.167 Test: bdev_compare_and_write ...passed
00:05:18.167 Test: bdev_compare ...passed
00:05:18.167 Test: bdev_compare_emulated ...passed
00:05:18.167 Test: bdev_zcopy_write ...passed
00:05:18.167 Test: bdev_zcopy_read ...passed
00:05:18.167 Test: bdev_open_while_hotremove ...passed
00:05:18.167 Test: bdev_close_while_hotremove ...passed
00:05:18.167 Test: bdev_open_ext_test ...[2024-11-18 04:45:41.681626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:05:18.167 passed
00:05:18.167 Test: bdev_open_ext_unregister ...[2024-11-18 04:45:41.681834] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:05:18.167 passed
00:05:18.426 Test: bdev_set_io_timeout ...passed
00:05:18.426 Test: bdev_set_qd_sampling ...passed
00:05:18.426 Test: lba_range_overlap ...passed
00:05:18.426 Test: lock_lba_range_check_ranges ...passed
00:05:18.426 Test: lock_lba_range_with_io_outstanding ...passed
00:05:18.426 Test: lock_lba_range_overlapped ...passed
00:05:18.426 Test: bdev_quiesce ...[2024-11-18 04:45:41.801675] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:05:18.426 passed
00:05:18.426 Test: bdev_io_abort ...passed
00:05:18.426 Test: bdev_unmap ...passed
00:05:18.426 Test: bdev_write_zeroes_split_test ...passed
00:05:18.426 Test: bdev_set_options_test ...[2024-11-18 04:45:41.897485] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:05:18.426 passed
00:05:18.426 Test: bdev_get_memory_domains ...passed
00:05:18.426 Test: bdev_io_ext ...passed
00:05:18.426 Test: bdev_io_ext_no_opts ...passed
00:05:18.685 Test: bdev_io_ext_invalid_opts ...passed
00:05:18.685 Test: bdev_io_ext_split ...passed
00:05:18.685 Test: bdev_io_ext_bounce_buffer ...passed
00:05:18.685 Test: bdev_register_uuid_alias ...[2024-11-18 04:45:42.020450] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 5d0e49b1-8b4c-4220-9f15-38ae1132d64d already exists
00:05:18.685 [2024-11-18 04:45:42.020529] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:5d0e49b1-8b4c-4220-9f15-38ae1132d64d alias for bdev bdev0
00:05:18.685 passed
00:05:18.685 Test: bdev_unregister_by_name ...[2024-11-18 04:45:42.038080] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:05:18.685 [2024-11-18 04:45:42.038157] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:05:18.685 passed
00:05:18.685 Test: for_each_bdev_test ...passed
00:05:18.685 Test: bdev_seek_test ...passed
00:05:18.685 Test: bdev_copy ...passed
00:05:18.685 Test: bdev_copy_split_test ...passed
00:05:18.685 Test: examine_locks ...passed
00:05:18.685 Test: claim_v2_rwo ...[2024-11-18 04:45:42.109350] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109422] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109465] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109485] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109516] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:05:18.685 passed
00:05:18.685 Test: claim_v2_rom ...[2024-11-18 04:45:42.109686] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109723] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109743] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109757] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.109829] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:05:18.685 [2024-11-18 04:45:42.109868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:05:18.685 passed
00:05:18.685 Test: claim_v2_rwm ...[2024-11-18 04:45:42.109984] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:05:18.685 [2024-11-18 04:45:42.110019] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110070] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110085] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110100] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110156] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:05:18.685 passed
00:05:18.685 Test: claim_v2_existing_writer ...[2024-11-18 04:45:42.110342] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:05:18.685 [2024-11-18 04:45:42.110383] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:05:18.685 passed
00:05:18.685 Test: claim_v2_existing_v1 ...[2024-11-18 04:45:42.110502] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110550] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:05:18.685 passed
00:05:18.685 Test: claim_v1_existing_v2 ...[2024-11-18 04:45:42.110681] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:18.685 [2024-11-18 04:45:42.110724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:18.686 [2024-11-18 04:45:42.110754] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:18.686 passed
00:05:18.686 Test: examine_claimed ...[2024-11-18 04:45:42.111019] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:05:18.686 passed
00:05:18.686
00:05:18.686 Run Summary: Type Total Ran Passed Failed Inactive
00:05:18.686 suites 1 1 n/a 0 0
00:05:18.686 tests 59 59 59 0 0
00:05:18.686 asserts 4599 4599 4599 0 n/a
00:05:18.686
00:05:18.686 Elapsed time = 1.008 seconds
00:05:18.686 04:45:42 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:05:18.686
00:05:18.686
00:05:18.686 CUnit - A unit testing framework for C - Version 2.1-3
00:05:18.686 http://cunit.sourceforge.net/
00:05:18.686
00:05:18.686
00:05:18.686 Suite: nvme
00:05:18.686 Test: test_create_ctrlr ...passed
00:05:18.686 Test: test_reset_ctrlr ...[2024-11-18 04:45:42.156010] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:05:18.686 Test: test_failover_ctrlr ...passed
00:05:18.686 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-18 04:45:42.158721] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.158963] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.159174] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_pending_reset ...[2024-11-18 04:45:42.160734] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.160984] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_attach_ctrlr ...[2024-11-18 04:45:42.162159] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:05:18.686 passed
00:05:18.686 Test: test_aer_cb ...passed
00:05:18.686 Test: test_submit_nvme_cmd ...passed
00:05:18.686 Test: test_add_remove_trid ...passed
00:05:18.686 Test: test_abort ...[2024-11-18 04:45:42.165463] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:05:18.686 passed
00:05:18.686 Test: test_get_io_qpair ...passed
00:05:18.686 Test: test_bdev_unregister ...passed
00:05:18.686 Test: test_compare_ns ...passed
00:05:18.686 Test: test_init_ana_log_page ...passed
00:05:18.686 Test: test_get_memory_domains ...passed
00:05:18.686 Test: test_reconnect_qpair ...[2024-11-18 04:45:42.168528] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed 00:05:18.686 Test: test_create_bdev_ctrlr ...[2024-11-18 04:45:42.169054] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:18.686 passed 00:05:18.686 Test: test_add_multi_ns_to_bdev ...[2024-11-18 04:45:42.170297] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:18.686 passed 00:05:18.686 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:18.686 Test: test_admin_path ...passed 00:05:18.686 Test: test_reset_bdev_ctrlr ...passed 00:05:18.686 Test: test_find_io_path ...passed 00:05:18.686 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:18.686 Test: test_retry_io_for_io_path_error ...passed 00:05:18.686 Test: test_retry_io_count ...passed 00:05:18.686 Test: test_concurrent_read_ana_log_page ...passed 00:05:18.686 Test: test_retry_io_for_ana_error ...passed 00:05:18.686 Test: test_check_io_error_resiliency_params ...[2024-11-18 04:45:42.177092] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:18.686 [2024-11-18 04:45:42.177146] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:18.686 [2024-11-18 04:45:42.177166] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:18.686 [2024-11-18 04:45:42.177182] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:18.686 [2024-11-18 04:45:42.177224] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:18.686 [2024-11-18 04:45:42.177256] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:18.686 [2024-11-18 04:45:42.177270] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:18.686 [2024-11-18 04:45:42.177299] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:18.686 [2024-11-18 04:45:42.177322] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:18.686 passed 00:05:18.686 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:18.686 Test: test_reconnect_ctrlr ...[2024-11-18 04:45:42.178058] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:18.686 [2024-11-18 04:45:42.178222] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:18.686 [2024-11-18 04:45:42.178514] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.178629] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.178776] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_retry_failover_ctrlr ...[2024-11-18 04:45:42.179128] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_fail_path ...[2024-11-18 04:45:42.179756] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.179912] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.180036] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.180141] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.180248] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_nvme_ns_cmp ...passed
00:05:18.686 Test: test_ana_transition ...passed
00:05:18.686 Test: test_set_preferred_path ...passed
00:05:18.686 Test: test_find_next_io_path ...passed
00:05:18.686 Test: test_find_io_path_min_qd ...passed
00:05:18.686 Test: test_disable_auto_failback ...[2024-11-18 04:45:42.182086] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed
00:05:18.686 Test: test_set_multipath_policy ...passed
00:05:18.686 Test: test_uuid_generation ...passed
00:05:18.686 Test: test_retry_io_to_same_path ...passed
00:05:18.686 Test: test_race_between_reset_and_disconnected ...passed
00:05:18.686 Test: test_ctrlr_op_rpc ...passed
00:05:18.686 Test: test_bdev_ctrlr_op_rpc ...passed
00:05:18.686 Test: test_disable_enable_ctrlr ...[2024-11-18 04:45:42.186079] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 [2024-11-18 04:45:42.186270] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:18.686 passed 00:05:18.686 Test: test_delete_ctrlr_done ...passed 00:05:18.686 Test: test_ns_remove_during_reset ...passed 00:05:18.686 00:05:18.686 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.686 suites 1 1 n/a 0 0 00:05:18.686 tests 48 48 48 0 0 00:05:18.686 asserts 3553 3553 3553 0 n/a 00:05:18.686 00:05:18.686 Elapsed time = 0.033 seconds 00:05:18.946 04:45:42 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:18.946 Test Options 00:05:18.946 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:18.946 00:05:18.946 00:05:18.946 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.946 http://cunit.sourceforge.net/ 00:05:18.946 00:05:18.946 00:05:18.946 Suite: raid 00:05:18.946 Test: test_create_raid ...passed 00:05:18.946 Test: test_create_raid_superblock ...passed 00:05:18.946 Test: test_delete_raid ...passed 00:05:18.946 Test: test_create_raid_invalid_args ...[2024-11-18 04:45:42.236699] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:18.946 [2024-11-18 04:45:42.237081] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:18.946 [2024-11-18 04:45:42.237653] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:18.946 [2024-11-18 04:45:42.237901] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:18.946 [2024-11-18 04:45:42.238851] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:18.946 passed 00:05:18.946 Test: test_delete_raid_invalid_args ...passed 00:05:18.946 Test: test_io_channel ...passed 00:05:18.946 Test: test_reset_io ...passed 00:05:18.946 Test: test_write_io ...passed 00:05:18.946 Test: test_read_io ...passed 00:05:19.514 Test: test_unmap_io ...passed 00:05:19.514 Test: test_io_failure ...[2024-11-18 04:45:42.816243] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:19.514 passed 00:05:19.514 Test: test_multi_raid_no_io ...passed 00:05:19.514 Test: test_multi_raid_with_io ...passed 00:05:19.514 Test: test_io_type_supported ...passed 00:05:19.514 Test: test_raid_json_dump_info ...passed 00:05:19.514 Test: test_context_size ...passed 00:05:19.514 Test: test_raid_level_conversions ...passed 00:05:19.514 Test: test_raid_process ...passed 00:05:19.514 Test: test_raid_io_split ...passed 00:05:19.514 00:05:19.514 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.514 suites 1 1 n/a 0 0 00:05:19.514 tests 19 19 19 0 0 00:05:19.514 asserts 177879 177879 177879 0 n/a 00:05:19.514 00:05:19.514 Elapsed time = 0.589 seconds 00:05:19.514 04:45:42 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:19.514 00:05:19.514 00:05:19.514 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.514 http://cunit.sourceforge.net/ 00:05:19.514 00:05:19.514 00:05:19.514 Suite: raid_sb 00:05:19.514 Test: test_raid_bdev_write_superblock ...passed 00:05:19.514 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:19.514 Test: 
test_raid_bdev_parse_superblock ...[2024-11-18 04:45:42.855279] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:19.514 passed 00:05:19.514 00:05:19.514 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.514 suites 1 1 n/a 0 0 00:05:19.514 tests 3 3 3 0 0 00:05:19.514 asserts 32 32 32 0 n/a 00:05:19.514 00:05:19.514 Elapsed time = 0.001 seconds 00:05:19.514 04:45:42 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:19.514 00:05:19.514 00:05:19.514 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.514 http://cunit.sourceforge.net/ 00:05:19.514 00:05:19.514 00:05:19.514 Suite: concat 00:05:19.514 Test: test_concat_start ...passed 00:05:19.514 Test: test_concat_rw ...passed 00:05:19.514 Test: test_concat_null_payload ...passed 00:05:19.514 00:05:19.514 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.514 suites 1 1 n/a 0 0 00:05:19.514 tests 3 3 3 0 0 00:05:19.514 asserts 8097 8097 8097 0 n/a 00:05:19.514 00:05:19.514 Elapsed time = 0.006 seconds 00:05:19.514 04:45:42 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:19.514 00:05:19.514 00:05:19.514 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.514 http://cunit.sourceforge.net/ 00:05:19.514 00:05:19.514 00:05:19.514 Suite: raid1 00:05:19.514 Test: test_raid1_start ...passed 00:05:19.514 Test: test_raid1_read_balancing ...passed 00:05:19.514 00:05:19.514 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.514 suites 1 1 n/a 0 0 00:05:19.514 tests 2 2 2 0 0 00:05:19.514 asserts 2856 2856 2856 0 n/a 00:05:19.514 00:05:19.514 Elapsed time = 0.005 seconds 00:05:19.514 04:45:42 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:19.514 00:05:19.514 00:05:19.514 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.514 http://cunit.sourceforge.net/ 00:05:19.514 00:05:19.514 00:05:19.514 Suite: zone 00:05:19.514 Test: test_zone_get_operation ...passed 00:05:19.514 Test: test_bdev_zone_get_info ...passed 00:05:19.514 Test: test_bdev_zone_management ...passed 00:05:19.514 Test: test_bdev_zone_append ...passed 00:05:19.515 Test: test_bdev_zone_append_with_md ...passed 00:05:19.515 Test: test_bdev_zone_appendv ...passed 00:05:19.515 Test: test_bdev_zone_appendv_with_md ...passed 00:05:19.515 Test: test_bdev_io_get_append_location ...passed 00:05:19.515 00:05:19.515 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.515 suites 1 1 n/a 0 0 00:05:19.515 tests 8 8 8 0 0 00:05:19.515 asserts 94 94 94 0 n/a 00:05:19.515 00:05:19.515 Elapsed time = 0.000 seconds 00:05:19.515 04:45:42 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:19.515 00:05:19.515 00:05:19.515 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.515 http://cunit.sourceforge.net/ 00:05:19.515 00:05:19.515 00:05:19.515 Suite: gpt_parse 00:05:19.515 Test: test_parse_mbr_and_primary ...[2024-11-18 04:45:43.002235] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:19.515 [2024-11-18 04:45:43.002473] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:19.515 [2024-11-18 04:45:43.002549] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:19.515 [2024-11-18 04:45:43.002588] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:19.515 [2024-11-18 04:45:43.002633] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:19.515 [2024-11-18 04:45:43.002665] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:19.515 passed 00:05:19.515 Test: test_parse_secondary ...[2024-11-18 04:45:43.003458] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:19.515 [2024-11-18 04:45:43.003499] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:19.515 [2024-11-18 04:45:43.003541] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:19.515 [2024-11-18 04:45:43.003569] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:19.515 passed 00:05:19.515 Test: test_check_mbr ...passed 00:05:19.515 Test: test_read_header ...[2024-11-18 04:45:43.004329] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:19.515 [2024-11-18 04:45:43.004385] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:19.515 [2024-11-18 04:45:43.004498] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:19.515 [2024-11-18 04:45:43.004535] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:19.515 [2024-11-18 04:45:43.004574] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:19.515 [2024-11-18 04:45:43.004623] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:19.515 [2024-11-18 04:45:43.004661] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:19.515 [2024-11-18 04:45:43.004683] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:19.515 passed 00:05:19.515 Test: test_read_partitions ...[2024-11-18 04:45:43.004785] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:19.515 [2024-11-18 04:45:43.004817] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:19.515 [2024-11-18 04:45:43.004850] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:19.515 [2024-11-18 04:45:43.004872] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:19.515 [2024-11-18 04:45:43.005284] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:05:19.515 passed 00:05:19.515 00:05:19.515 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.515 suites 1 1 n/a 0 0 00:05:19.515 tests 5 5 5 0 0 00:05:19.515 asserts 33 33 33 0 n/a 00:05:19.515 00:05:19.515 Elapsed time = 0.004 seconds 00:05:19.515 04:45:43 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:19.775 00:05:19.775 00:05:19.775 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.775 http://cunit.sourceforge.net/ 00:05:19.775 00:05:19.775 00:05:19.775 Suite: bdev_part 00:05:19.775 Test: part_test ...[2024-11-18 04:45:43.038072] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:19.775 passed 00:05:19.775 Test: part_free_test ...passed 00:05:19.775 Test: part_get_io_channel_test ...passed 00:05:19.775 Test: part_construct_ext ...passed 00:05:19.775 00:05:19.775 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.775 suites 1 1 n/a 0 0 00:05:19.775 tests 4 4 4 0 0 00:05:19.775 asserts 48 48 48 0 n/a 00:05:19.775 00:05:19.775 Elapsed time = 0.041 seconds 00:05:19.775 04:45:43 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:19.775 00:05:19.775 00:05:19.775 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.775 http://cunit.sourceforge.net/ 00:05:19.775 00:05:19.775 00:05:19.775 Suite: scsi_nvme_suite 00:05:19.775 Test: scsi_nvme_translate_test ...passed 00:05:19.775 00:05:19.775 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.775 suites 1 1 n/a 0 0 00:05:19.775 tests 1 1 1 0 0 00:05:19.775 asserts 104 104 104 0 n/a 00:05:19.775 00:05:19.775 Elapsed time = 0.000 seconds 00:05:19.775 04:45:43 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:19.775 00:05:19.775 00:05:19.775 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.775 http://cunit.sourceforge.net/ 00:05:19.775 00:05:19.775 00:05:19.775 Suite: lvol 00:05:19.775 Test: ut_lvs_init ...[2024-11-18 04:45:43.141835] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:19.775 passed 00:05:19.775 Test: ut_lvol_init ...[2024-11-18 04:45:43.142286] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:19.775 passed 00:05:19.775 Test: ut_lvol_snapshot ...passed 00:05:19.775 Test: ut_lvol_clone ...passed 00:05:19.775 Test: ut_lvs_destroy ...passed 00:05:19.775 Test: ut_lvs_unload ...passed 00:05:19.775 Test: ut_lvol_resize ...[2024-11-18 04:45:43.144097] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:19.775 passed 00:05:19.775 Test: ut_lvol_set_read_only ...passed 00:05:19.775 Test: ut_lvol_hotremove ...passed 00:05:19.775 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:19.775 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:19.775 Test: ut_lvol_read_write ...passed 00:05:19.775 Test: ut_vbdev_lvol_submit_request ...passed 00:05:19.775 Test: ut_lvol_examine_config ...passed 00:05:19.775 Test: ut_lvol_examine_disk ...[2024-11-18 04:45:43.144998] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:19.775 passed 00:05:19.775 Test: ut_lvol_rename ...[2024-11-18 04:45:43.146074] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:19.775 [2024-11-18 04:45:43.146127] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:19.775 passed 00:05:19.775 Test: ut_bdev_finish ...passed 00:05:19.775 Test: ut_lvs_rename ...passed 00:05:19.775 Test: ut_lvol_seek ...passed 00:05:19.775 Test: ut_esnap_dev_create ...passed 00:05:19.775 Test: ut_lvol_esnap_clone_bad_args ...passed 00:05:19.775 00:05:19.775 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.775 suites 1 1 n/a 0 0 00:05:19.775 tests 21 21 21 0 0 00:05:19.775 asserts 712 712 712 0 n/a 00:05:19.775 00:05:19.775 Elapsed time = 0.006 seconds 00:05:19.775 [2024-11-18 04:45:43.146948] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:19.775 [2024-11-18 04:45:43.147005] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:19.775 [2024-11-18 04:45:43.147038] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:19.775 [2024-11-18 04:45:43.147076] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:19.775 [2024-11-18 04:45:43.147212] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:19.775 [2024-11-18 04:45:43.147248] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:19.775 04:45:43 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:19.775 00:05:19.775 00:05:19.775 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.775 http://cunit.sourceforge.net/ 00:05:19.775 00:05:19.775 00:05:19.775 Suite: zone_block 00:05:19.775 Test: test_zone_block_create ...passed 00:05:19.775 Test: test_zone_block_create_invalid ...[2024-11-18 04:45:43.198909] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:19.775 [2024-11-18 04:45:43.199126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists [2024-11-18 04:45:43.199496] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:19.775 [2024-11-18 04:45:43.199660] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists [2024-11-18 04:45:43.199889] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:19.775 [2024-11-18 04:45:43.200110] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument passed 00:05:19.775 Test: test_get_zone_info
...passed 00:05:19.775 Test: test_supported_io_types ...passed 00:05:19.776 Test: test_reset_zone ...[2024-11-18 04:45:43.200324] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:19.776 [2024-11-18 04:45:43.200353] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument [2024-11-18 04:45:43.200860] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.200955] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.201006] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.201566] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 passed 00:05:19.776 Test: test_open_zone ...[2024-11-18 04:45:43.201612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.201911] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 passed 00:05:19.776 Test: test_zone_write ...[2024-11-18 04:45:43.202559] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.202623] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.202922] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:19.776 [2024-11-18 04:45:43.202950] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.203008] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:19.776 [2024-11-18 04:45:43.203021] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.207336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:19.776 [2024-11-18 04:45:43.207373] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.207444] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:19.776 [2024-11-18 04:45:43.207465] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:19.776 passed 00:05:19.776 Test: test_zone_read ...[2024-11-18 04:45:43.211525] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:19.776 [2024-11-18 04:45:43.211573] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.211877] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:19.776 [2024-11-18 04:45:43.211906] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.211949] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:19.776 [2024-11-18 04:45:43.211972] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 passed 00:05:19.776 Test: test_close_zone ...[2024-11-18 04:45:43.212323] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:19.776 [2024-11-18 04:45:43.212351] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.212556] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.212624] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 passed 00:05:19.776 Test: test_finish_zone ...[2024-11-18 04:45:43.212773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.212798] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 passed 00:05:19.776 Test: test_append_zone ...[2024-11-18 04:45:43.213204] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.213247] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.213495] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:19.776 [2024-11-18 04:45:43.213518] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 [2024-11-18 04:45:43.213567] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:19.776 [2024-11-18 04:45:43.213590] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:19.776 passed 00:05:19.776 00:05:19.776 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.776 suites 1 1 n/a 0 0 00:05:19.776 tests 11 11 11 0 0 00:05:19.776 asserts 3437 3437 3437 0 n/a 00:05:19.776 00:05:19.776 Elapsed time = 0.023 seconds 00:05:19.776 [2024-11-18 04:45:43.221969] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:19.776 [2024-11-18 04:45:43.222009] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:19.776 04:45:43 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:19.776 00:05:19.776 00:05:19.776 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.776 http://cunit.sourceforge.net/ 00:05:19.776 00:05:19.776 00:05:19.776 Suite: bdev 00:05:20.035 Test: basic ...[2024-11-18 04:45:43.307271] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5b0f153a5ec1): Operation not permitted (rc=-1) 00:05:20.035 [2024-11-18 04:45:43.307729] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x5b0f153a5e80): Operation not permitted (rc=-1) 00:05:20.035 [2024-11-18 04:45:43.307790] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5b0f153a5ec1): Operation not permitted (rc=-1) 00:05:20.035 passed 00:05:20.035 Test: unregister_and_close ...passed 00:05:20.035 Test: unregister_and_close_different_threads ...passed 00:05:20.035 Test: basic_qos ...passed 00:05:20.035 Test: put_channel_during_reset ...passed 00:05:20.035 Test: aborted_reset ...passed 00:05:20.035 Test: aborted_reset_no_outstanding_io ...passed 00:05:20.294 Test: io_during_reset ...passed 00:05:20.294 Test: reset_completions ...passed 00:05:20.294 Test: io_during_qos_queue ...passed 00:05:20.294 Test: io_during_qos_reset ...passed 00:05:20.294 Test: enomem ...passed 00:05:20.294 Test: enomem_multi_bdev ...passed 00:05:20.294 Test: enomem_multi_bdev_unregister ...passed 00:05:20.294 Test: enomem_multi_io_target ...passed 00:05:20.294 Test: qos_dynamic_enable ...passed 00:05:20.294 Test: bdev_histograms_mt ...passed 00:05:20.554 Test: bdev_set_io_timeout_mt ...passed 00:05:20.554 Test: lock_lba_range_then_submit_io ...[2024-11-18 04:45:43.823365] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:05:20.554 [2024-11-18 04:45:43.829594] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x5b0f153a5e40 already registered (old:0x5130000003c0 new:0x513000000c80) 00:05:20.554 passed 00:05:20.554 Test: unregister_during_reset ...passed 00:05:20.554 Test: event_notify_and_close ...passed 00:05:20.554 Test: unregister_and_qos_poller ...passed 00:05:20.554 Suite: bdev_wrong_thread 00:05:20.554 Test: spdk_bdev_register_wt ...passed 00:05:20.554 Test: spdk_bdev_examine_wt ...[2024-11-18 04:45:43.915018] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x518000001480 (0x518000001480) 00:05:20.554 [2024-11-18 04:45:43.915302] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x518000001480 (0x518000001480) 00:05:20.554 passed 00:05:20.554 00:05:20.554 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.554 suites 2 2 
n/a 0 0 00:05:20.554 tests 24 24 24 0 0 00:05:20.554 asserts 621 621 621 0 n/a 00:05:20.554 00:05:20.554 Elapsed time = 0.619 seconds 00:05:20.554 00:05:20.554 real 0m2.859s 00:05:20.554 user 0m1.279s 00:05:20.554 sys 0m1.582s 00:05:20.554 04:45:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.554 04:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.554 ************************************ 00:05:20.554 END TEST unittest_bdev 00:05:20.554 ************************************ 00:05:20.554 04:45:43 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:20.554 04:45:43 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:20.554 04:45:43 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:20.554 04:45:43 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:20.554 04:45:43 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:20.554 04:45:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.554 04:45:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.554 04:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.554 ************************************ 00:05:20.554 START TEST unittest_bdev_raid5f 00:05:20.554 ************************************ 00:05:20.554 04:45:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:20.554 00:05:20.554 00:05:20.554 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.554 http://cunit.sourceforge.net/ 00:05:20.554 00:05:20.554 00:05:20.554 Suite: raid5f 00:05:20.554 Test: test_raid5f_start ...passed 00:05:21.122 Test: test_raid5f_submit_read_request ...passed 00:05:21.122 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:24.469 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:05:39.368 Test: test_raid5f_chunk_write_error ...passed 00:05:45.935 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:05:48.519 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:15.085 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:15.085 00:06:15.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.085 suites 1 1 n/a 0 0 00:06:15.085 tests 8 8 8 0 0 00:06:15.085 asserts 351864 351864 351864 0 n/a 00:06:15.085 00:06:15.085 Elapsed time = 52.122 seconds 00:06:15.085 00:06:15.085 real 0m52.218s 00:06:15.085 user 0m49.875s 00:06:15.085 sys 0m2.328s 00:06:15.085 04:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.085 ************************************ 00:06:15.085 END TEST unittest_bdev_raid5f 00:06:15.085 ************************************ 00:06:15.085 04:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:15.085 04:46:36 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:06:15.085 04:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.085 04:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.085 04:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:15.085 ************************************ 00:06:15.085 START TEST unittest_blob_blobfs 00:06:15.085 ************************************ 00:06:15.085 
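The trace lines above show the two mechanisms that drive this whole log: optional suites are gated on build-time defines in include/spdk/config.h (the grep calls at unittest.sh@189-203), and each unit-test binary is wrapped in run_test, which prints the START/END banners and timing seen here. A minimal sketch of that gating pattern, assuming run_test comes from SPDK's test/common/autotest_common.sh (the script traced above) and that the workspace layout matches this log:

```bash
#!/usr/bin/env bash
# Sketch of the config-gating pattern visible in the trace above.
# Assumption: a full SPDK checkout at this path, with run_test provided
# by test/common/autotest_common.sh as in the traces in this log.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
source "$SPDK_DIR/test/common/autotest_common.sh"

# A feature-gated suite only runs when its #define was baked into
# include/spdk/config.h at configure time:
if grep -q '#define SPDK_CONFIG_RAID5F 1' "$SPDK_DIR/include/spdk/config.h"; then
    # run_test names the test, times it, and emits the START/END banners.
    run_test unittest_bdev_raid5f \
        "$SPDK_DIR/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut"
fi
```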
04:46:36 -- common/autotest_common.sh@1114 -- # unittest_blob 00:06:15.085 04:46:36 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:15.085 04:46:36 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:15.085 00:06:15.085 00:06:15.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.085 http://cunit.sourceforge.net/ 00:06:15.085 00:06:15.085 00:06:15.085 Suite: blob_nocopy_noextent 00:06:15.085 Test: blob_init ...[2024-11-18 04:46:36.298494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:15.085 passed 00:06:15.085 Test: blob_thin_provision ...passed 00:06:15.085 Test: blob_read_only ...passed 00:06:15.085 Test: bs_load ...[2024-11-18 04:46:36.380978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:15.085 passed 00:06:15.085 Test: bs_load_custom_cluster_size ...passed 00:06:15.085 Test: bs_load_after_failed_grow ...passed 00:06:15.085 Test: bs_cluster_sz ...[2024-11-18 04:46:36.401071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:15.085 [2024-11-18 04:46:36.401449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:06:15.085 [2024-11-18 04:46:36.401529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:15.085 passed 00:06:15.085 Test: bs_resize_md ...passed 00:06:15.085 Test: bs_destroy ...passed 00:06:15.085 Test: bs_type ...passed 00:06:15.085 Test: bs_super_block ...passed 00:06:15.085 Test: bs_test_recover_cluster_count ...passed 00:06:15.085 Test: bs_grow_live ...passed 00:06:15.085 Test: bs_grow_live_no_space ...passed 00:06:15.085 Test: bs_test_grow ...passed 00:06:15.085 Test: blob_serialize_test ...passed 00:06:15.085 Test: super_block_crc ...passed 00:06:15.085 Test: blob_thin_prov_write_count_io ...passed 00:06:15.085 Test: bs_load_iter_test ...passed 00:06:15.085 Test: blob_relations ...[2024-11-18 04:46:36.520555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.085 [2024-11-18 04:46:36.520665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.521581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.085 [2024-11-18 04:46:36.521622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 passed 00:06:15.085 Test: blob_relations2 ...[2024-11-18 04:46:36.532116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.085 [2024-11-18 04:46:36.532210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.532262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.085 [2024-11-18 
04:46:36.532278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.533784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.085 [2024-11-18 04:46:36.533829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.534304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.085 [2024-11-18 04:46:36.534347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 passed 00:06:15.085 Test: blob_relations3 ...passed 00:06:15.085 Test: blobstore_clean_power_failure ...passed 00:06:15.085 Test: blob_delete_snapshot_power_failure ...[2024-11-18 04:46:36.638901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:15.085 [2024-11-18 04:46:36.647748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:15.085 [2024-11-18 04:46:36.647826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:15.085 [2024-11-18 04:46:36.647852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.656420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:15.085 [2024-11-18 04:46:36.656482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:15.085 [2024-11-18 04:46:36.656508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:15.085 [2024-11-18 04:46:36.656529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.664794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:15.085 [2024-11-18 04:46:36.664883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.672727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:15.085 [2024-11-18 04:46:36.672820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 [2024-11-18 04:46:36.680937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:15.085 [2024-11-18 04:46:36.681013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.085 passed 00:06:15.085 Test: blob_create_snapshot_power_failure ...[2024-11-18 04:46:36.704274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:15.085 [2024-11-18 04:46:36.718923] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:15.085 [2024-11-18 04:46:36.726634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:15.085 passed 00:06:15.085 Test: blob_io_unit ...passed 00:06:15.085 Test: blob_io_unit_compatibility ...passed 00:06:15.085 Test: blob_ext_md_pages ...passed 00:06:15.085 Test: blob_esnap_io_4096_4096 ...passed 00:06:15.085 Test: blob_esnap_io_512_512 ...passed 00:06:15.085 Test: blob_esnap_io_4096_512 ...passed 00:06:15.085 Test: blob_esnap_io_512_4096 ...passed 00:06:15.085 Suite: blob_bs_nocopy_noextent 00:06:15.085 Test: blob_open ...passed 00:06:15.085 Test: blob_create ...[2024-11-18 04:46:36.903539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:15.085 passed 00:06:15.085 Test: blob_create_loop ...passed 00:06:15.085 Test: blob_create_fail ...[2024-11-18 04:46:36.982612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:15.085 passed 00:06:15.085 Test: blob_create_internal ...passed 00:06:15.085 Test: blob_create_zero_extent ...passed 00:06:15.085 Test: blob_snapshot ...passed 00:06:15.085 Test: blob_clone ...passed 00:06:15.085 Test: blob_inflate ...[2024-11-18 04:46:37.107579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:15.085 passed 00:06:15.085 Test: blob_delete ...passed 00:06:15.085 Test: blob_resize_test ...[2024-11-18 04:46:37.151872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:15.085 passed 00:06:15.085 Test: channel_ops ...passed 00:06:15.085 Test: blob_super ...passed 00:06:15.085 Test: blob_rw_verify_iov ...passed 00:06:15.085 Test: blob_unmap ...passed 00:06:15.085 Test: blob_iter ...passed 00:06:15.085 Test: blob_parse_md ...passed 00:06:15.085 Test: bs_load_pending_removal ...passed 00:06:15.085 Test: bs_unload ...[2024-11-18 04:46:37.332614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:15.085 passed 00:06:15.085 Test: bs_usable_clusters ...passed 00:06:15.085 Test: blob_crc ...[2024-11-18 04:46:37.377424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:15.085 [2024-11-18 04:46:37.377572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:15.085 passed 00:06:15.085 Test: blob_flags ...passed 00:06:15.085 Test: bs_version ...passed 00:06:15.086 Test: blob_set_xattrs_test ...[2024-11-18 04:46:37.446053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:15.086 [2024-11-18 04:46:37.446136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:15.086 passed 00:06:15.086 Test: blob_thin_prov_alloc ...passed 00:06:15.086 Test: blob_insert_cluster_msg_test ...passed 00:06:15.086 Test: blob_thin_prov_rw ...passed 
00:06:15.086 Test: blob_thin_prov_rle ...passed 00:06:15.086 Test: blob_thin_prov_rw_iov ...passed 00:06:15.086 Test: blob_snapshot_rw ...passed 00:06:15.086 Test: blob_snapshot_rw_iov ...passed 00:06:15.086 Test: blob_inflate_rw ...passed 00:06:15.086 Test: blob_snapshot_freeze_io ...passed 00:06:15.086 Test: blob_operation_split_rw ...passed 00:06:15.086 Test: blob_operation_split_rw_iov ...passed 00:06:15.086 Test: blob_simultaneous_operations ...[2024-11-18 04:46:38.232829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:15.086 [2024-11-18 04:46:38.232938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.086 [2024-11-18 04:46:38.234196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:15.086 [2024-11-18 04:46:38.234258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.086 [2024-11-18 04:46:38.244367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:15.086 [2024-11-18 04:46:38.244428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.086 [2024-11-18 04:46:38.244541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:15.086 [2024-11-18 04:46:38.244563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.086 passed 00:06:15.086 Test: blob_persist_test ...passed 00:06:15.086 Test: blob_decouple_snapshot ...passed 00:06:15.086 Test: blob_seek_io_unit ...passed 00:06:15.086 Test: blob_nested_freezes ...passed 00:06:15.086 Suite: blob_blob_nocopy_noextent 00:06:15.086 Test: blob_write ...passed 00:06:15.086 Test: blob_read ...passed 00:06:15.086 Test: blob_rw_verify ...passed 00:06:15.086 Test: blob_rw_verify_iov_nomem ...passed 00:06:15.086 Test: blob_rw_iov_read_only ...passed 00:06:15.086 Test: blob_xattr ...passed 00:06:15.086 Test: blob_dirty_shutdown ...passed 00:06:15.086 Test: blob_is_degraded ...passed 00:06:15.086 Suite: blob_esnap_bs_nocopy_noextent 00:06:15.086 Test: blob_esnap_create ...passed 00:06:15.086 Test: blob_esnap_thread_add_remove ...passed 00:06:15.086 Test: blob_esnap_clone_snapshot ...passed 00:06:15.376 Test: blob_esnap_clone_inflate ...passed 00:06:15.376 Test: blob_esnap_clone_decouple ...passed 00:06:15.376 Test: blob_esnap_clone_reload ...passed 00:06:15.376 Test: blob_esnap_hotplug ...passed 00:06:15.376 Suite: blob_nocopy_extent 00:06:15.376 Test: blob_init ...[2024-11-18 04:46:38.686687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:15.376 passed 00:06:15.376 Test: blob_thin_provision ...passed 00:06:15.376 Test: blob_read_only ...passed 00:06:15.376 Test: bs_load ...[2024-11-18 04:46:38.715990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:15.376 passed 00:06:15.376 Test: bs_load_custom_cluster_size ...passed 00:06:15.376 Test: bs_load_after_failed_grow ...passed 00:06:15.376 Test: bs_cluster_sz ...[2024-11-18 04:46:38.733432] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:15.376 [2024-11-18 04:46:38.733683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:06:15.376 [2024-11-18 04:46:38.733787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:15.376 passed 00:06:15.376 Test: bs_resize_md ...passed 00:06:15.376 Test: bs_destroy ...passed 00:06:15.376 Test: bs_type ...passed 00:06:15.376 Test: bs_super_block ...passed 00:06:15.376 Test: bs_test_recover_cluster_count ...passed 00:06:15.376 Test: bs_grow_live ...passed 00:06:15.376 Test: bs_grow_live_no_space ...passed 00:06:15.376 Test: bs_test_grow ...passed 00:06:15.376 Test: blob_serialize_test ...passed 00:06:15.376 Test: super_block_crc ...passed 00:06:15.376 Test: blob_thin_prov_write_count_io ...passed 00:06:15.376 Test: bs_load_iter_test ...passed 00:06:15.376 Test: blob_relations ...[2024-11-18 04:46:38.841495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.376 [2024-11-18 04:46:38.841588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.376 [2024-11-18 04:46:38.842725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.376 [2024-11-18 04:46:38.842774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.376 passed 00:06:15.376 Test: blob_relations2 ...[2024-11-18 04:46:38.852771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.376 [2024-11-18 04:46:38.852840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.376 [2024-11-18 04:46:38.852865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.376 [2024-11-18 04:46:38.852877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.376 [2024-11-18 04:46:38.854466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.376 [2024-11-18 04:46:38.854510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.376 [2024-11-18 04:46:38.854965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:15.376 [2024-11-18 04:46:38.854996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.376 passed 00:06:15.376 Test: blob_relations3 ...passed 00:06:15.658 Test: blobstore_clean_power_failure ...passed 00:06:15.658 Test: blob_delete_snapshot_power_failure ...[2024-11-18 04:46:38.956947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:15.658 [2024-11-18 04:46:38.965241] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:15.658 [2024-11-18 04:46:38.973611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:15.658 [2024-11-18 04:46:38.973696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:15.658 [2024-11-18 04:46:38.973750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.658 [2024-11-18 04:46:38.981988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:15.659 [2024-11-18 04:46:38.982047] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:15.659 [2024-11-18 04:46:38.982071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:15.659 [2024-11-18 04:46:38.982091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.659 [2024-11-18 04:46:38.990380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:15.659 [2024-11-18 04:46:38.990443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:15.659 [2024-11-18 04:46:38.990466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:15.659 [2024-11-18 04:46:38.990488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.659 [2024-11-18 04:46:38.998787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:15.659 [2024-11-18 04:46:38.998894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.659 [2024-11-18 04:46:39.007333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:15.659 [2024-11-18 04:46:39.007439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.659 [2024-11-18 04:46:39.016228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:15.659 [2024-11-18 04:46:39.016355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:15.659 passed 00:06:15.659 Test: blob_create_snapshot_power_failure ...[2024-11-18 04:46:39.040759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:15.659 [2024-11-18 04:46:39.048682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:15.659 [2024-11-18 04:46:39.064785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:15.659 [2024-11-18 04:46:39.073998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 
00:06:15.659 passed 00:06:15.659 Test: blob_io_unit ...passed 00:06:15.659 Test: blob_io_unit_compatibility ...passed 00:06:15.659 Test: blob_ext_md_pages ...passed 00:06:15.659 Test: blob_esnap_io_4096_4096 ...passed 00:06:15.926 Test: blob_esnap_io_512_512 ...passed 00:06:15.926 Test: blob_esnap_io_4096_512 ...passed 00:06:15.926 Test: blob_esnap_io_512_4096 ...passed 00:06:15.926 Suite: blob_bs_nocopy_extent 00:06:15.926 Test: blob_open ...passed 00:06:15.926 Test: blob_create ...[2024-11-18 04:46:39.245290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:15.926 passed 00:06:15.926 Test: blob_create_loop ...passed 00:06:15.926 Test: blob_create_fail ...[2024-11-18 04:46:39.327340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:15.926 passed 00:06:15.926 Test: blob_create_internal ...passed 00:06:15.927 Test: blob_create_zero_extent ...passed 00:06:15.927 Test: blob_snapshot ...passed 00:06:15.927 Test: blob_clone ...passed 00:06:16.185 Test: blob_inflate ...[2024-11-18 04:46:39.456493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:16.185 passed 00:06:16.185 Test: blob_delete ...passed 00:06:16.185 Test: blob_resize_test ...[2024-11-18 04:46:39.497130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:16.185 passed 00:06:16.185 Test: channel_ops ...passed 00:06:16.185 Test: blob_super ...passed 00:06:16.185 Test: blob_rw_verify_iov ...passed 00:06:16.185 Test: blob_unmap ...passed 00:06:16.185 Test: blob_iter ...passed 00:06:16.185 Test: blob_parse_md ...passed 00:06:16.185 Test: bs_load_pending_removal ...passed 00:06:16.185 Test: bs_unload ...[2024-11-18 04:46:39.661539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:16.185 passed 00:06:16.185 Test: bs_usable_clusters ...passed 00:06:16.185 Test: blob_crc ...[2024-11-18 04:46:39.705939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:16.185 [2024-11-18 04:46:39.706111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:16.445 passed 00:06:16.445 Test: blob_flags ...passed 00:06:16.445 Test: bs_version ...passed 00:06:16.445 Test: blob_set_xattrs_test ...[2024-11-18 04:46:39.768935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:16.445 [2024-11-18 04:46:39.769026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:16.445 passed 00:06:16.445 Test: blob_thin_prov_alloc ...passed 00:06:16.445 Test: blob_insert_cluster_msg_test ...passed 00:06:16.445 Test: blob_thin_prov_rw ...passed 00:06:16.445 Test: blob_thin_prov_rle ...passed 00:06:16.445 Test: blob_thin_prov_rw_iov ...passed 00:06:16.703 Test: blob_snapshot_rw ...passed 00:06:16.703 Test: blob_snapshot_rw_iov ...passed 00:06:16.703 Test: blob_inflate_rw ...passed 00:06:16.962 Test: blob_snapshot_freeze_io ...passed 
00:06:16.962 Test: blob_operation_split_rw ...passed 00:06:16.962 Test: blob_operation_split_rw_iov ...passed 00:06:17.222 Test: blob_simultaneous_operations ...[2024-11-18 04:46:40.495747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:17.222 [2024-11-18 04:46:40.495841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.222 [2024-11-18 04:46:40.496859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:17.222 [2024-11-18 04:46:40.496891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.222 [2024-11-18 04:46:40.506577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:17.222 [2024-11-18 04:46:40.506630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.222 [2024-11-18 04:46:40.506726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:17.222 [2024-11-18 04:46:40.506746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.222 passed 00:06:17.222 Test: blob_persist_test ...passed 00:06:17.222 Test: blob_decouple_snapshot ...passed 00:06:17.222 Test: blob_seek_io_unit ...passed 00:06:17.222 Test: blob_nested_freezes ...passed 00:06:17.222 Suite: blob_blob_nocopy_extent 00:06:17.222 Test: blob_write ...passed 00:06:17.222 Test: blob_read ...passed 00:06:17.222 Test: blob_rw_verify ...passed 00:06:17.222 Test: blob_rw_verify_iov_nomem ...passed 00:06:17.222 Test: blob_rw_iov_read_only ...passed 00:06:17.481 Test: blob_xattr ...passed 00:06:17.481 Test: blob_dirty_shutdown ...passed 00:06:17.481 Test: blob_is_degraded ...passed 00:06:17.481 Suite: blob_esnap_bs_nocopy_extent 00:06:17.481 Test: blob_esnap_create ...passed 00:06:17.481 Test: blob_esnap_thread_add_remove ...passed 00:06:17.481 Test: blob_esnap_clone_snapshot ...passed 00:06:17.481 Test: blob_esnap_clone_inflate ...passed 00:06:17.481 Test: blob_esnap_clone_decouple ...passed 00:06:17.481 Test: blob_esnap_clone_reload ...passed 00:06:17.481 Test: blob_esnap_hotplug ...passed 00:06:17.481 Suite: blob_copy_noextent 00:06:17.481 Test: blob_init ...[2024-11-18 04:46:40.955981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:17.482 passed 00:06:17.482 Test: blob_thin_provision ...passed 00:06:17.482 Test: blob_read_only ...passed 00:06:17.482 Test: bs_load ...[2024-11-18 04:46:40.982431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:17.482 passed 00:06:17.482 Test: bs_load_custom_cluster_size ...passed 00:06:17.482 Test: bs_load_after_failed_grow ...passed 00:06:17.482 Test: bs_cluster_sz ...[2024-11-18 04:46:40.997375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:17.482 [2024-11-18 04:46:40.997528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or 
increase cluster size. 00:06:17.482 [2024-11-18 04:46:40.997564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:17.740 passed 00:06:17.740 Test: bs_resize_md ...passed 00:06:17.740 Test: bs_destroy ...passed 00:06:17.740 Test: bs_type ...passed 00:06:17.740 Test: bs_super_block ...passed 00:06:17.740 Test: bs_test_recover_cluster_count ...passed 00:06:17.740 Test: bs_grow_live ...passed 00:06:17.740 Test: bs_grow_live_no_space ...passed 00:06:17.740 Test: bs_test_grow ...passed 00:06:17.740 Test: blob_serialize_test ...passed 00:06:17.740 Test: super_block_crc ...passed 00:06:17.740 Test: blob_thin_prov_write_count_io ...passed 00:06:17.740 Test: bs_load_iter_test ...passed 00:06:17.740 Test: blob_relations ...[2024-11-18 04:46:41.099592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:17.740 [2024-11-18 04:46:41.099694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.740 [2024-11-18 04:46:41.100323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:17.740 [2024-11-18 04:46:41.100351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.740 passed 00:06:17.740 Test: blob_relations2 ...[2024-11-18 04:46:41.110969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:17.740 [2024-11-18 04:46:41.111057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.740 [2024-11-18 04:46:41.111081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:17.741 [2024-11-18 04:46:41.111094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 [2024-11-18 04:46:41.112178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:17.741 [2024-11-18 04:46:41.112223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 [2024-11-18 04:46:41.112511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:17.741 [2024-11-18 04:46:41.112535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 passed 00:06:17.741 Test: blob_relations3 ...passed 00:06:17.741 Test: blobstore_clean_power_failure ...passed 00:06:17.741 Test: blob_delete_snapshot_power_failure ...[2024-11-18 04:46:41.212046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:17.741 [2024-11-18 04:46:41.220549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:17.741 [2024-11-18 04:46:41.220645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:17.741 [2024-11-18 04:46:41.220667] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 [2024-11-18 04:46:41.228849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:17.741 [2024-11-18 04:46:41.228927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:17.741 [2024-11-18 04:46:41.228943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:17.741 [2024-11-18 04:46:41.228962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 [2024-11-18 04:46:41.236785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:17.741 [2024-11-18 04:46:41.236896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 [2024-11-18 04:46:41.244998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:17.741 [2024-11-18 04:46:41.245102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.741 [2024-11-18 04:46:41.253912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:17.741 [2024-11-18 04:46:41.253981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:17.999 passed 00:06:17.999 Test: blob_create_snapshot_power_failure ...[2024-11-18 04:46:41.280167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:17.999 [2024-11-18 04:46:41.295263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:17.999 [2024-11-18 04:46:41.303137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:17.999 passed 00:06:17.999 Test: blob_io_unit ...passed 00:06:17.999 Test: blob_io_unit_compatibility ...passed 00:06:17.999 Test: blob_ext_md_pages ...passed 00:06:17.999 Test: blob_esnap_io_4096_4096 ...passed 00:06:17.999 Test: blob_esnap_io_512_512 ...passed 00:06:18.000 Test: blob_esnap_io_4096_512 ...passed 00:06:18.000 Test: blob_esnap_io_512_4096 ...passed 00:06:18.000 Suite: blob_bs_copy_noextent 00:06:18.000 Test: blob_open ...passed 00:06:18.000 Test: blob_create ...[2024-11-18 04:46:41.467360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:18.000 passed 00:06:18.259 Test: blob_create_loop ...passed 00:06:18.259 Test: blob_create_fail ...[2024-11-18 04:46:41.542027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:18.259 passed 00:06:18.259 Test: blob_create_internal ...passed 00:06:18.259 Test: blob_create_zero_extent ...passed 00:06:18.259 Test: blob_snapshot ...passed 00:06:18.259 Test: blob_clone ...passed 00:06:18.259 Test: blob_inflate ...[2024-11-18 04:46:41.649108] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:18.259 passed 00:06:18.259 Test: blob_delete ...passed 00:06:18.259 Test: blob_resize_test ...[2024-11-18 04:46:41.690304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:18.259 passed 00:06:18.259 Test: channel_ops ...passed 00:06:18.259 Test: blob_super ...passed 00:06:18.259 Test: blob_rw_verify_iov ...passed 00:06:18.259 Test: blob_unmap ...passed 00:06:18.518 Test: blob_iter ...passed 00:06:18.518 Test: blob_parse_md ...passed 00:06:18.518 Test: bs_load_pending_removal ...passed 00:06:18.518 Test: bs_unload ...[2024-11-18 04:46:41.871300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:18.518 passed 00:06:18.518 Test: bs_usable_clusters ...passed 00:06:18.518 Test: blob_crc ...[2024-11-18 04:46:41.918459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:18.518 [2024-11-18 04:46:41.918592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:18.518 passed 00:06:18.518 Test: blob_flags ...passed 00:06:18.518 Test: bs_version ...passed 00:06:18.518 Test: blob_set_xattrs_test ...[2024-11-18 04:46:41.987322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:18.518 [2024-11-18 04:46:41.987641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:18.518 passed 00:06:18.776 Test: blob_thin_prov_alloc ...passed 00:06:18.776 Test: blob_insert_cluster_msg_test ...passed 00:06:18.776 Test: blob_thin_prov_rw ...passed 00:06:18.776 Test: blob_thin_prov_rle ...passed 00:06:18.776 Test: blob_thin_prov_rw_iov ...passed 00:06:18.776 Test: blob_snapshot_rw ...passed 00:06:18.776 Test: blob_snapshot_rw_iov ...passed 00:06:19.035 Test: blob_inflate_rw ...passed 00:06:19.035 Test: blob_snapshot_freeze_io ...passed 00:06:19.293 Test: blob_operation_split_rw ...passed 00:06:19.293 Test: blob_operation_split_rw_iov ...passed 00:06:19.293 Test: blob_simultaneous_operations ...[2024-11-18 04:46:42.722332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:19.293 [2024-11-18 04:46:42.722671] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.293 [2024-11-18 04:46:42.723159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:19.293 [2024-11-18 04:46:42.723488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.293 [2024-11-18 04:46:42.727041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:19.293 [2024-11-18 04:46:42.727417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.293 [2024-11-18 04:46:42.727634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:19.293 [2024-11-18 04:46:42.727691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.293 passed 00:06:19.293 Test: blob_persist_test ...passed 00:06:19.293 Test: blob_decouple_snapshot ...passed 00:06:19.552 Test: blob_seek_io_unit ...passed 00:06:19.552 Test: blob_nested_freezes ...passed 00:06:19.552 Suite: blob_blob_copy_noextent 00:06:19.552 Test: blob_write ...passed 00:06:19.552 Test: blob_read ...passed 00:06:19.552 Test: blob_rw_verify ...passed 00:06:19.552 Test: blob_rw_verify_iov_nomem ...passed 00:06:19.552 Test: blob_rw_iov_read_only ...passed 00:06:19.552 Test: blob_xattr ...passed 00:06:19.552 Test: blob_dirty_shutdown ...passed 00:06:19.552 Test: blob_is_degraded ...passed 00:06:19.552 Suite: blob_esnap_bs_copy_noextent 00:06:19.552 Test: blob_esnap_create ...passed 00:06:19.552 Test: blob_esnap_thread_add_remove ...passed 00:06:19.811 Test: blob_esnap_clone_snapshot ...passed 00:06:19.811 Test: blob_esnap_clone_inflate ...passed 00:06:19.811 Test: blob_esnap_clone_decouple ...passed 00:06:19.811 Test: blob_esnap_clone_reload ...passed 00:06:19.811 Test: blob_esnap_hotplug ...passed 00:06:19.811 Suite: blob_copy_extent 00:06:19.811 Test: blob_init ...[2024-11-18 04:46:43.164693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:19.811 passed 00:06:19.811 Test: blob_thin_provision ...passed 00:06:19.811 Test: blob_read_only ...passed 00:06:19.811 Test: bs_load ...[2024-11-18 04:46:43.192759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:19.811 passed 00:06:19.811 Test: bs_load_custom_cluster_size ...passed 00:06:19.811 Test: bs_load_after_failed_grow ...passed 00:06:19.811 Test: bs_cluster_sz ...[2024-11-18 04:46:43.208946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:19.811 [2024-11-18 04:46:43.209173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:19.811 [2024-11-18 04:46:43.209333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:19.811 passed 00:06:19.811 Test: bs_resize_md ...passed 00:06:19.811 Test: bs_destroy ...passed 00:06:19.811 Test: bs_type ...passed 00:06:19.811 Test: bs_super_block ...passed 00:06:19.811 Test: bs_test_recover_cluster_count ...passed 00:06:19.811 Test: bs_grow_live ...passed 00:06:19.811 Test: bs_grow_live_no_space ...passed 00:06:19.811 Test: bs_test_grow ...passed 00:06:19.811 Test: blob_serialize_test ...passed 00:06:19.811 Test: super_block_crc ...passed 00:06:19.811 Test: blob_thin_prov_write_count_io ...passed 00:06:19.811 Test: bs_load_iter_test ...passed 00:06:19.811 Test: blob_relations ...[2024-11-18 04:46:43.322232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.811 [2024-11-18 04:46:43.322337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.811 [2024-11-18 04:46:43.323903] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.811 [2024-11-18 04:46:43.323964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.811 passed 00:06:20.070 Test: blob_relations2 ...[2024-11-18 04:46:43.336949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:20.070 [2024-11-18 04:46:43.337031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.070 [2024-11-18 04:46:43.337070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:20.070 [2024-11-18 04:46:43.337087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.070 [2024-11-18 04:46:43.339576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:20.070 [2024-11-18 04:46:43.339646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.070 [2024-11-18 04:46:43.340513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:20.071 [2024-11-18 04:46:43.340581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 passed 00:06:20.071 Test: blob_relations3 ...passed 00:06:20.071 Test: blobstore_clean_power_failure ...passed 00:06:20.071 Test: blob_delete_snapshot_power_failure ...[2024-11-18 04:46:43.453612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:20.071 [2024-11-18 04:46:43.466579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:20.071 [2024-11-18 04:46:43.475988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:20.071 [2024-11-18 04:46:43.476084] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:20.071 [2024-11-18 04:46:43.476114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 [2024-11-18 04:46:43.485366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:20.071 [2024-11-18 04:46:43.485450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:20.071 [2024-11-18 04:46:43.485484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:20.071 [2024-11-18 04:46:43.485512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 [2024-11-18 04:46:43.494864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:20.071 [2024-11-18 04:46:43.494959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:20.071 [2024-11-18 04:46:43.494983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:20.071 [2024-11-18 04:46:43.495007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 [2024-11-18 04:46:43.504386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:20.071 [2024-11-18 04:46:43.504496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 [2024-11-18 04:46:43.513845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:20.071 [2024-11-18 04:46:43.513967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 [2024-11-18 04:46:43.523642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:20.071 [2024-11-18 04:46:43.523744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.071 passed 00:06:20.071 Test: blob_create_snapshot_power_failure ...[2024-11-18 04:46:43.548092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:20.071 [2024-11-18 04:46:43.556738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:20.071 [2024-11-18 04:46:43.575071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:20.071 [2024-11-18 04:46:43.585088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:20.330 passed 00:06:20.330 Test: blob_io_unit ...passed 00:06:20.330 Test: blob_io_unit_compatibility ...passed 00:06:20.330 Test: blob_ext_md_pages ...passed 00:06:20.330 Test: blob_esnap_io_4096_4096 ...passed 00:06:20.330 Test: blob_esnap_io_512_512 ...passed 00:06:20.330 Test: blob_esnap_io_4096_512 ...passed 00:06:20.330 Test: 
blob_esnap_io_512_4096 ...passed 00:06:20.330 Suite: blob_bs_copy_extent 00:06:20.330 Test: blob_open ...passed 00:06:20.330 Test: blob_create ...[2024-11-18 04:46:43.769293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:20.330 passed 00:06:20.330 Test: blob_create_loop ...passed 00:06:20.589 Test: blob_create_fail ...[2024-11-18 04:46:43.863610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:20.589 passed 00:06:20.589 Test: blob_create_internal ...passed 00:06:20.589 Test: blob_create_zero_extent ...passed 00:06:20.589 Test: blob_snapshot ...passed 00:06:20.589 Test: blob_clone ...passed 00:06:20.589 Test: blob_inflate ...[2024-11-18 04:46:43.987549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:20.589 passed 00:06:20.589 Test: blob_delete ...passed 00:06:20.589 Test: blob_resize_test ...[2024-11-18 04:46:44.035449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:20.589 passed 00:06:20.589 Test: channel_ops ...passed 00:06:20.589 Test: blob_super ...passed 00:06:20.589 Test: blob_rw_verify_iov ...passed 00:06:20.849 Test: blob_unmap ...passed 00:06:20.849 Test: blob_iter ...passed 00:06:20.849 Test: blob_parse_md ...passed 00:06:20.849 Test: bs_load_pending_removal ...passed 00:06:20.849 Test: bs_unload ...[2024-11-18 04:46:44.206478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:20.849 passed 00:06:20.849 Test: bs_usable_clusters ...passed 00:06:20.849 Test: blob_crc ...[2024-11-18 04:46:44.253317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:20.849 [2024-11-18 04:46:44.253459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:20.849 passed 00:06:20.849 Test: blob_flags ...passed 00:06:20.849 Test: bs_version ...passed 00:06:20.849 Test: blob_set_xattrs_test ...[2024-11-18 04:46:44.327072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:20.849 [2024-11-18 04:46:44.327158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:20.849 passed 00:06:21.107 Test: blob_thin_prov_alloc ...passed 00:06:21.107 Test: blob_insert_cluster_msg_test ...passed 00:06:21.107 Test: blob_thin_prov_rw ...passed 00:06:21.107 Test: blob_thin_prov_rle ...passed 00:06:21.107 Test: blob_thin_prov_rw_iov ...passed 00:06:21.107 Test: blob_snapshot_rw ...passed 00:06:21.107 Test: blob_snapshot_rw_iov ...passed 00:06:21.366 Test: blob_inflate_rw ...passed 00:06:21.366 Test: blob_snapshot_freeze_io ...passed 00:06:21.625 Test: blob_operation_split_rw ...passed 00:06:21.625 Test: blob_operation_split_rw_iov ...passed 00:06:21.625 Test: blob_simultaneous_operations ...[2024-11-18 04:46:45.076153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:21.625 [2024-11-18 
04:46:45.076316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.625 [2024-11-18 04:46:45.076690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:21.625 [2024-11-18 04:46:45.076716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.625 [2024-11-18 04:46:45.079284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:21.625 [2024-11-18 04:46:45.079354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.625 [2024-11-18 04:46:45.079437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:21.625 [2024-11-18 04:46:45.079456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.625 passed 00:06:21.625 Test: blob_persist_test ...passed 00:06:21.625 Test: blob_decouple_snapshot ...passed 00:06:21.884 Test: blob_seek_io_unit ...passed 00:06:21.884 Test: blob_nested_freezes ...passed 00:06:21.884 Suite: blob_blob_copy_extent 00:06:21.884 Test: blob_write ...passed 00:06:21.884 Test: blob_read ...passed 00:06:21.884 Test: blob_rw_verify ...passed 00:06:21.884 Test: blob_rw_verify_iov_nomem ...passed 00:06:21.884 Test: blob_rw_iov_read_only ...passed 00:06:21.884 Test: blob_xattr ...passed 00:06:21.884 Test: blob_dirty_shutdown ...passed 00:06:21.884 Test: blob_is_degraded ...passed 00:06:21.884 Suite: blob_esnap_bs_copy_extent 00:06:21.884 Test: blob_esnap_create ...passed 00:06:22.143 Test: blob_esnap_thread_add_remove ...passed 00:06:22.143 Test: blob_esnap_clone_snapshot ...passed 00:06:22.143 Test: blob_esnap_clone_inflate ...passed 00:06:22.143 Test: blob_esnap_clone_decouple ...passed 00:06:22.143 Test: blob_esnap_clone_reload ...passed 00:06:22.143 Test: blob_esnap_hotplug ...passed 00:06:22.143 00:06:22.143 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.143 suites 16 16 n/a 0 0 00:06:22.143 tests 348 348 348 0 0 00:06:22.143 asserts 92605 92605 92605 0 n/a 00:06:22.143 00:06:22.143 Elapsed time = 9.208 seconds 00:06:22.143 04:46:45 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:22.143 00:06:22.143 00:06:22.143 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.143 http://cunit.sourceforge.net/ 00:06:22.143 00:06:22.143 00:06:22.143 Suite: blob_bdev 00:06:22.143 Test: create_bs_dev ...passed 00:06:22.143 Test: create_bs_dev_ro ...passed[2024-11-18 04:46:45.647181] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:22.143 00:06:22.143 Test: create_bs_dev_rw ...passed 00:06:22.143 Test: claim_bs_dev ...passed 00:06:22.143 Test: claim_bs_dev_ro ...[2024-11-18 04:46:45.647499] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:22.143 passed 00:06:22.143 Test: deferred_destroy_refs ...passed 00:06:22.143 Test: deferred_destroy_channels ...passed 00:06:22.143 Test: deferred_destroy_threads ...passed 00:06:22.143 00:06:22.143 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.143 suites 1 1 n/a 0 0 00:06:22.143 tests 8 8 8 0 0 00:06:22.143 
asserts 119 119 119 0 n/a 00:06:22.143 00:06:22.143 Elapsed time = 0.001 seconds 00:06:22.143 04:46:45 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:22.403 00:06:22.403 00:06:22.403 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.403 http://cunit.sourceforge.net/ 00:06:22.403 00:06:22.403 00:06:22.403 Suite: tree 00:06:22.403 Test: blobfs_tree_op_test ...passed 00:06:22.403 00:06:22.403 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.403 suites 1 1 n/a 0 0 00:06:22.403 tests 1 1 1 0 0 00:06:22.403 asserts 27 27 27 0 n/a 00:06:22.403 00:06:22.403 Elapsed time = 0.000 seconds 00:06:22.403 04:46:45 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:22.403 00:06:22.403 00:06:22.403 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.403 http://cunit.sourceforge.net/ 00:06:22.403 00:06:22.403 00:06:22.403 Suite: blobfs_async_ut 00:06:22.403 Test: fs_init ...passed 00:06:22.403 Test: fs_open ...passed 00:06:22.403 Test: fs_create ...passed 00:06:22.403 Test: fs_truncate ...passed 00:06:22.403 Test: fs_rename ...[2024-11-18 04:46:45.817856] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:22.403 passed 00:06:22.403 Test: fs_rw_async ...passed 00:06:22.403 Test: fs_writev_readv_async ...passed 00:06:22.403 Test: tree_find_buffer_ut ...passed 00:06:22.403 Test: channel_ops ...passed 00:06:22.403 Test: channel_ops_sync ...passed 00:06:22.403 00:06:22.403 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.403 suites 1 1 n/a 0 0 00:06:22.403 tests 10 10 10 0 0 00:06:22.403 asserts 292 292 292 0 n/a 00:06:22.403 00:06:22.403 Elapsed time = 0.152 seconds 00:06:22.403 04:46:45 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:22.403 00:06:22.403 00:06:22.403 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.403 http://cunit.sourceforge.net/ 00:06:22.403 00:06:22.403 00:06:22.403 Suite: blobfs_sync_ut 00:06:22.663 Test: cache_read_after_write ...[2024-11-18 04:46:45.963531] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:22.663 passed 00:06:22.663 Test: file_length ...passed 00:06:22.663 Test: append_write_to_extend_blob ...passed 00:06:22.663 Test: partial_buffer ...passed 00:06:22.663 Test: cache_write_null_buffer ...passed 00:06:22.663 Test: fs_create_sync ...passed 00:06:22.663 Test: fs_rename_sync ...passed 00:06:22.663 Test: cache_append_no_cache ...passed 00:06:22.663 Test: fs_delete_file_without_close ...passed 00:06:22.663 00:06:22.663 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.663 suites 1 1 n/a 0 0 00:06:22.663 tests 9 9 9 0 0 00:06:22.663 asserts 345 345 345 0 n/a 00:06:22.663 00:06:22.663 Elapsed time = 0.265 seconds 00:06:22.663 04:46:46 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:22.663 00:06:22.663 00:06:22.663 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.663 http://cunit.sourceforge.net/ 00:06:22.663 00:06:22.663 00:06:22.663 Suite: blobfs_bdev_ut 00:06:22.663 Test: spdk_blobfs_bdev_detect_test ...passed 00:06:22.663 Test: spdk_blobfs_bdev_create_test ...passed 00:06:22.663 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:22.663 00:06:22.663 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:22.664 suites 1 1 n/a 0 0 00:06:22.664 tests 3 3 3 0 0 00:06:22.664 asserts 9 9 9 0 n/a 00:06:22.664 00:06:22.664 Elapsed time = 0.000 seconds 00:06:22.664 [2024-11-18 04:46:46.095360] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:22.664 [2024-11-18 04:46:46.095628] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:22.664 00:06:22.664 real 0m9.834s 00:06:22.664 user 0m9.241s 00:06:22.664 sys 0m0.693s 00:06:22.664 04:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.664 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.664 ************************************ 00:06:22.664 END TEST unittest_blob_blobfs 00:06:22.664 ************************************ 00:06:22.664 04:46:46 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:06:22.664 04:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.664 04:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.664 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.664 ************************************ 00:06:22.664 START TEST unittest_event 00:06:22.664 ************************************ 00:06:22.664 04:46:46 -- common/autotest_common.sh@1114 -- # unittest_event 00:06:22.664 04:46:46 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:22.664 00:06:22.664 00:06:22.664 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.664 http://cunit.sourceforge.net/ 00:06:22.664 00:06:22.664 00:06:22.664 Suite: app_suite 00:06:22.664 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:22.664 options: 00:06:22.664 -c, --config JSON config file (default none) 00:06:22.664 --json JSON config file (default none) 00:06:22.664 --json-ignore-init-errors 00:06:22.664 don't exit on invalid config entry 00:06:22.664 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:22.664 -g, --single-file-segments 00:06:22.664 force creating just one hugetlbfs file 00:06:22.664 -h, --help show this usage 00:06:22.664 -i, --shm-id shared memory ID (optional) 00:06:22.664 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:22.664 --lcores lcore to CPU mapping list. The list is in the format: 00:06:22.664 [<,lcores[@CPUs]>...] 00:06:22.664 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:22.664 Within the group, '-' is used for range separator, 00:06:22.664 ',' is used for single number separator. 00:06:22.664 '( )' can be omitted for single element group, 00:06:22.664 '@' can be omitted if cpus and lcores have the same value 00:06:22.664 -n, --mem-channels channel number of memory channels used for DPDK 00:06:22.664 -p, --main-core main (primary) core for DPDK 00:06:22.664 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:22.664 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:22.664 --disable-cpumask-locks Disable CPU core lock files. 
00:06:22.664 --silence-noticelog disable notice level logging to stderr 00:06:22.664 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:22.664 -u, --no-pci disable PCI access 00:06:22.664 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:22.664 --max-delay maximum reactor delay (in microseconds) 00:06:22.664 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:22.664 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:22.664 -R, --huge-unlink unlink huge files after initialization 00:06:22.664 -v, --version print SPDK version 00:06:22.664 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:22.664 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:22.664 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:22.664 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:22.664 Tracepoints vary in size and can use more than one trace entry. 00:06:22.664 --rpcs-allowed comma-separated list of permitted RPCS 00:06:22.664 --env-context Opaque context for use of the env implementation 00:06:22.664 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:22.664 --no-huge run without using hugepages 00:06:22.664 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:22.664 -e, --tpoint-group [:] 00:06:22.664 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:22.664 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:22.664 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:22.664 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:22.664 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:22.664 app_ut [options] 00:06:22.664 options: 00:06:22.664 -c, --config JSON config file (default none) 00:06:22.664 --json JSON config file (default none) 00:06:22.664 --json-ignore-init-errors 00:06:22.664 don't exit on invalid config entry 00:06:22.664 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:22.664 -g, --single-file-segments 00:06:22.664 force creating just one hugetlbfs file 00:06:22.664 -h, --help show this usage 00:06:22.664 -i, --shm-id shared memory ID (optional) 00:06:22.664 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:22.664 --lcores lcore to CPU mapping list. The list is in the format: 00:06:22.664 [<,lcores[@CPUs]>...] 00:06:22.664 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:22.664 Within the group, '-' is used for range separator, 00:06:22.664 ',' is used for single number separator. 
00:06:22.664 '( )' can be omitted for single element group, 00:06:22.664 '@' can be omitted if cpus and lcores have the same value 00:06:22.664 -n, --mem-channels channel number of memory channels used for DPDK 00:06:22.664 -p, --main-core main (primary) core for DPDK 00:06:22.664 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:22.664 app_ut: invalid option -- 'z' 00:06:22.664 app_ut: unrecognized option '--test-long-opt' 00:06:22.664 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:22.664 --disable-cpumask-locks Disable CPU core lock files. 00:06:22.664 --silence-noticelog disable notice level logging to stderr 00:06:22.664 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:22.664 -u, --no-pci disable PCI access 00:06:22.664 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:22.664 --max-delay maximum reactor delay (in microseconds) 00:06:22.664 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:22.664 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:22.664 -R, --huge-unlink unlink huge files after initialization 00:06:22.664 -v, --version print SPDK version 00:06:22.664 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:22.664 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:22.664 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:22.664 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:22.664 Tracepoints vary in size and can use more than one trace entry. 00:06:22.664 --rpcs-allowed comma-separated list of permitted RPCS 00:06:22.664 --env-context Opaque context for use of the env implementation 00:06:22.664 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:22.664 --no-huge run without using hugepages 00:06:22.664 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:22.664 -e, --tpoint-group [:] 00:06:22.664 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:22.664 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:22.664 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:22.664 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:22.664 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:22.664 app_ut [options] 00:06:22.664 options: 00:06:22.664 -c, --config JSON config file (default none) 00:06:22.664 --json JSON config file (default none) 00:06:22.664 --json-ignore-init-errors 00:06:22.664 don't exit on invalid config entry 00:06:22.664 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:22.664 -g, --single-file-segments 00:06:22.664 force creating just one hugetlbfs file 00:06:22.664 -h, --help show this usage 00:06:22.664 -i, --shm-id shared memory ID (optional) 00:06:22.664 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:22.664 --lcores lcore to CPU mapping list. The list is in the format: 00:06:22.664 [<,lcores[@CPUs]>...] 
00:06:22.664 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:22.664 Within the group, '-' is used for range separator, 00:06:22.664 ',' is used for single number separator. 00:06:22.665 '( )' can be omitted for single element group, 00:06:22.665 '@' can be omitted if cpus and lcores have the same value 00:06:22.665 -n, --mem-channels channel number of memory channels used for DPDK 00:06:22.665 -p, --main-core main (primary) core for DPDK 00:06:22.665 [2024-11-18 04:46:46.176957] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:06:22.665 [2024-11-18 04:46:46.177235] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:22.665 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:22.665 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:22.665 --disable-cpumask-locks Disable CPU core lock files. 00:06:22.665 --silence-noticelog disable notice level logging to stderr 00:06:22.665 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:22.665 -u, --no-pci disable PCI access 00:06:22.665 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:22.665 --max-delay maximum reactor delay (in microseconds) 00:06:22.665 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:22.665 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:22.665 -R, --huge-unlink unlink huge files after initialization 00:06:22.665 -v, --version print SPDK version 00:06:22.665 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:22.665 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:22.665 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:22.665 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:22.665 Tracepoints vary in size and can use more than one trace entry. 00:06:22.665 --rpcs-allowed comma-separated list of permitted RPCS 00:06:22.665 --env-context Opaque context for use of the env implementation 00:06:22.665 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:22.665 --no-huge run without using hugepages 00:06:22.665 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:22.665 -e, --tpoint-group [:] 00:06:22.665 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:22.665 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:22.665 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:22.665 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:22.665 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:22.665 passed 00:06:22.665 00:06:22.665 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.665 suites 1 1 n/a 0 0 00:06:22.665 tests 1 1 1 0 0 00:06:22.665 asserts 8 8 8 0 n/a 00:06:22.665 00:06:22.665 Elapsed time = 0.001 seconds 00:06:22.665 [2024-11-18 04:46:46.177413] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:22.924 04:46:46 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:22.924 00:06:22.924 00:06:22.924 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.924 http://cunit.sourceforge.net/ 00:06:22.924 00:06:22.924 00:06:22.924 Suite: app_suite 00:06:22.924 Test: test_create_reactor ...passed 00:06:22.924 Test: test_init_reactors ...passed 00:06:22.924 Test: test_event_call ...passed 00:06:22.924 Test: test_schedule_thread ...passed 00:06:22.924 Test: test_reschedule_thread ...passed 00:06:22.924 Test: test_bind_thread ...passed 00:06:22.924 Test: test_for_each_reactor ...passed 00:06:22.924 Test: test_reactor_stats ...passed 00:06:22.924 Test: test_scheduler ...passed 00:06:22.924 Test: test_governor ...passed 00:06:22.924 00:06:22.924 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.924 suites 1 1 n/a 0 0 00:06:22.924 tests 10 10 10 0 0 00:06:22.924 asserts 344 344 344 0 n/a 00:06:22.924 00:06:22.924 Elapsed time = 0.024 seconds 00:06:22.924 00:06:22.924 real 0m0.096s 00:06:22.924 user 0m0.058s 00:06:22.924 sys 0m0.039s 00:06:22.924 ************************************ 00:06:22.924 END TEST unittest_event 00:06:22.924 ************************************ 00:06:22.924 04:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.924 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.924 04:46:46 -- unit/unittest.sh@209 -- # uname -s 00:06:22.924 04:46:46 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:06:22.924 04:46:46 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:06:22.924 04:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.924 04:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.924 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.924 ************************************ 00:06:22.924 START TEST unittest_ftl 00:06:22.924 ************************************ 00:06:22.924 04:46:46 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:06:22.924 04:46:46 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:22.924 00:06:22.924 00:06:22.924 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.924 http://cunit.sourceforge.net/ 00:06:22.924 00:06:22.924 00:06:22.924 Suite: ftl_band_suite 00:06:22.924 Test: test_band_block_offset_from_addr_base ...passed 00:06:22.924 Test: test_band_block_offset_from_addr_offset ...passed 00:06:22.924 Test: test_band_addr_from_block_offset ...passed 00:06:23.184 Test: test_band_set_addr ...passed 00:06:23.184 Test: test_invalidate_addr ...passed 00:06:23.184 Test: test_next_xfer_addr ...passed 00:06:23.184 00:06:23.184 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.184 suites 1 1 n/a 0 0 00:06:23.184 tests 6 6 6 0 0 00:06:23.184 asserts 30356 30356 30356 0 n/a 00:06:23.184 
00:06:23.184 Elapsed time = 0.190 seconds 00:06:23.184 04:46:46 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:23.184 00:06:23.184 00:06:23.184 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.184 http://cunit.sourceforge.net/ 00:06:23.184 00:06:23.184 00:06:23.184 Suite: ftl_bitmap 00:06:23.184 Test: test_ftl_bitmap_create ...passed 00:06:23.184 Test: test_ftl_bitmap_get ...[2024-11-18 04:46:46.603760] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:23.184 [2024-11-18 04:46:46.603979] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:23.184 passed 00:06:23.184 Test: test_ftl_bitmap_set ...passed 00:06:23.184 Test: test_ftl_bitmap_clear ...passed 00:06:23.184 Test: test_ftl_bitmap_find_first_set ...passed 00:06:23.184 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:23.184 Test: test_ftl_bitmap_count_set ...passed 00:06:23.184 00:06:23.184 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.184 suites 1 1 n/a 0 0 00:06:23.184 tests 7 7 7 0 0 00:06:23.184 asserts 137 137 137 0 n/a 00:06:23.184 00:06:23.184 Elapsed time = 0.001 seconds 00:06:23.184 04:46:46 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:23.184 00:06:23.184 00:06:23.184 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.184 http://cunit.sourceforge.net/ 00:06:23.184 00:06:23.184 00:06:23.184 Suite: ftl_io_suite 00:06:23.184 Test: test_completion ...passed 00:06:23.184 Test: test_multiple_ios ...passed 00:06:23.184 00:06:23.184 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.184 suites 1 1 n/a 0 0 00:06:23.184 tests 2 2 2 0 0 00:06:23.184 asserts 47 47 47 0 n/a 00:06:23.184 00:06:23.184 Elapsed time = 0.004 seconds 00:06:23.184 04:46:46 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:23.184 00:06:23.184 00:06:23.184 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.184 http://cunit.sourceforge.net/ 00:06:23.184 00:06:23.184 00:06:23.184 Suite: ftl_mngt 00:06:23.184 Test: test_next_step ...passed 00:06:23.184 Test: test_continue_step ...passed 00:06:23.184 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:23.184 Test: test_fail_step ...passed 00:06:23.184 Test: test_mngt_call_and_call_rollback ...passed 00:06:23.184 Test: test_nested_process_failure ...passed 00:06:23.184 00:06:23.184 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.184 suites 1 1 n/a 0 0 00:06:23.184 tests 6 6 6 0 0 00:06:23.184 asserts 176 176 176 0 n/a 00:06:23.184 00:06:23.184 Elapsed time = 0.002 seconds 00:06:23.184 04:46:46 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:23.184 00:06:23.184 00:06:23.184 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.184 http://cunit.sourceforge.net/ 00:06:23.184 00:06:23.184 00:06:23.185 Suite: ftl_mempool 00:06:23.185 Test: test_ftl_mempool_create ...passed 00:06:23.185 Test: test_ftl_mempool_get_put ...passed 00:06:23.185 00:06:23.185 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.185 suites 1 1 n/a 0 0 00:06:23.185 tests 2 2 2 0 0 00:06:23.185 asserts 36 36 36 0 n/a 00:06:23.185 00:06:23.185 Elapsed time = 0.000 seconds 00:06:23.444 04:46:46 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:23.444 00:06:23.444 00:06:23.444 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.444 http://cunit.sourceforge.net/ 00:06:23.444 00:06:23.444 00:06:23.444 Suite: ftl_addr64_suite 00:06:23.444 Test: test_addr_cached ...passed 00:06:23.444 00:06:23.444 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.444 suites 1 1 n/a 0 0 00:06:23.444 tests 1 1 1 0 0 00:06:23.444 asserts 1536 1536 1536 0 n/a 00:06:23.444 00:06:23.444 Elapsed time = 0.001 seconds 00:06:23.444 04:46:46 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:23.444 00:06:23.444 00:06:23.444 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.444 http://cunit.sourceforge.net/ 00:06:23.444 00:06:23.444 00:06:23.444 Suite: ftl_sb 00:06:23.444 Test: test_sb_crc_v2 ...passed 00:06:23.444 Test: test_sb_crc_v3 ...passed 00:06:23.444 Test: test_sb_v3_md_layout ...[2024-11-18 04:46:46.745579] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:23.444 [2024-11-18 04:46:46.745865] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:23.444 [2024-11-18 04:46:46.745922] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:23.444 [2024-11-18 04:46:46.745954] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:23.444 [2024-11-18 04:46:46.745988] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:23.444 [2024-11-18 04:46:46.746018] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:23.444 [2024-11-18 04:46:46.746062] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:23.444 [2024-11-18 04:46:46.746104] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:23.444 [2024-11-18 04:46:46.746184] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:23.444 passed 00:06:23.445 Test: test_sb_v5_md_layout ...[2024-11-18 04:46:46.746251] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:23.445 [2024-11-18 04:46:46.746303] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:23.445 passed 00:06:23.445 00:06:23.445 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.445 suites 1 1 n/a 0 0 00:06:23.445 tests 4 4 4 0 0 00:06:23.445 asserts 148 148 148 0 n/a 00:06:23.445 00:06:23.445 Elapsed time = 0.002 seconds 00:06:23.445 04:46:46 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:23.445 00:06:23.445 00:06:23.445 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:23.445 http://cunit.sourceforge.net/ 00:06:23.445 00:06:23.445 00:06:23.445 Suite: ftl_layout_upgrade 00:06:23.445 Test: test_l2p_upgrade ...passed 00:06:23.445 00:06:23.445 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.445 suites 1 1 n/a 0 0 00:06:23.445 tests 1 1 1 0 0 00:06:23.445 asserts 140 140 140 0 n/a 00:06:23.445 00:06:23.445 Elapsed time = 0.001 seconds 00:06:23.445 00:06:23.445 real 0m0.479s 00:06:23.445 user 0m0.211s 00:06:23.445 sys 0m0.268s 00:06:23.445 04:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.445 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.445 ************************************ 00:06:23.445 END TEST unittest_ftl 00:06:23.445 ************************************ 00:06:23.445 04:46:46 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:23.445 04:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.445 04:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.445 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.445 ************************************ 00:06:23.445 START TEST unittest_accel 00:06:23.445 ************************************ 00:06:23.445 04:46:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:23.445 00:06:23.445 00:06:23.445 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.445 http://cunit.sourceforge.net/ 00:06:23.445 00:06:23.445 00:06:23.445 Suite: accel_sequence 00:06:23.445 Test: test_sequence_fill_copy ...passed 00:06:23.445 Test: test_sequence_abort ...passed 00:06:23.445 Test: test_sequence_append_error ...passed 00:06:23.445 Test: test_sequence_completion_error ...[2024-11-18 04:46:46.869758] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7e479b7287c0 00:06:23.445 passed 00:06:23.445 Test: test_sequence_decompress ...[2024-11-18 04:46:46.870004] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7e479b7287c0 00:06:23.445 [2024-11-18 04:46:46.870090] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7e479b7287c0 00:06:23.445 [2024-11-18 04:46:46.870151] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7e479b7287c0 00:06:23.445 passed 00:06:23.445 Test: test_sequence_reverse ...passed 00:06:23.445 Test: test_sequence_copy_elision ...passed 00:06:23.445 Test: test_sequence_accel_buffers ...passed 00:06:23.445 Test: test_sequence_memory_domain ...[2024-11-18 04:46:46.882887] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:23.445 passed 00:06:23.445 Test: test_sequence_module_memory_domain ...[2024-11-18 04:46:46.883057] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:23.445 passed 00:06:23.445 Test: test_sequence_crypto ...passed 00:06:23.445 Test: test_sequence_driver ...[2024-11-18 04:46:46.890273] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7e47989aa7c0 using driver: ut 00:06:23.445 
passed 00:06:23.445 Test: test_sequence_same_iovs ...[2024-11-18 04:46:46.890376] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7e47989aa7c0 through driver: ut 00:06:23.445 passed 00:06:23.445 Test: test_sequence_crc32 ...passed 00:06:23.445 Suite: accel 00:06:23.445 Test: test_spdk_accel_task_complete ...passed 00:06:23.445 Test: test_get_task ...passed 00:06:23.445 Test: test_spdk_accel_submit_copy ...passed 00:06:23.445 Test: test_spdk_accel_submit_dualcast ...passed 00:06:23.445 Test: test_spdk_accel_submit_compare ...passed 00:06:23.445 Test: test_spdk_accel_submit_fill ...passed 00:06:23.445 Test: test_spdk_accel_submit_crc32c ...passed 00:06:23.445 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:23.445 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:23.445 Test: test_spdk_accel_submit_xor ...passed 00:06:23.445 Test: test_spdk_accel_module_find_by_name ...passed 00:06:23.445 Test: test_spdk_accel_module_register ...[2024-11-18 04:46:46.895790] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:23.445 [2024-11-18 04:46:46.895850] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:23.445 passed 00:06:23.445 00:06:23.445 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.445 suites 2 2 n/a 0 0 00:06:23.445 tests 26 26 26 0 0 00:06:23.445 asserts 831 831 831 0 n/a 00:06:23.445 00:06:23.445 Elapsed time = 0.039 seconds 00:06:23.445 00:06:23.445 real 0m0.077s 00:06:23.445 user 0m0.050s 00:06:23.445 sys 0m0.028s 00:06:23.445 04:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.445 ************************************ 00:06:23.445 END TEST unittest_accel 00:06:23.445 ************************************ 00:06:23.445 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.445 04:46:46 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:23.445 04:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.445 04:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.445 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 START TEST unittest_ioat 00:06:23.705 ************************************ 00:06:23.705 04:46:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:23.705 00:06:23.705 00:06:23.705 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.705 http://cunit.sourceforge.net/ 00:06:23.705 00:06:23.705 00:06:23.705 Suite: ioat 00:06:23.705 Test: ioat_state_check ...passed 00:06:23.705 00:06:23.705 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.705 suites 1 1 n/a 0 0 00:06:23.705 tests 1 1 1 0 0 00:06:23.705 asserts 32 32 32 0 n/a 00:06:23.705 00:06:23.705 Elapsed time = 0.000 seconds 00:06:23.705 00:06:23.705 real 0m0.027s 00:06:23.705 user 0m0.011s 00:06:23.705 sys 0m0.017s 00:06:23.705 04:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.705 ************************************ 00:06:23.705 END TEST unittest_ioat 00:06:23.705 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 04:46:47 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:23.705 04:46:47 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:23.705 04:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.705 04:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.705 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 START TEST unittest_idxd_user 00:06:23.705 ************************************ 00:06:23.705 04:46:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:23.705 00:06:23.705 00:06:23.705 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.705 http://cunit.sourceforge.net/ 00:06:23.705 00:06:23.705 00:06:23.705 Suite: idxd_user 00:06:23.705 Test: test_idxd_wait_cmd ...passed 00:06:23.705 Test: test_idxd_reset_dev ...passed 00:06:23.705 Test: test_idxd_group_config ...passed 00:06:23.705 Test: test_idxd_wq_config ...passed[2024-11-18 04:46:47.068943] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:23.705 [2024-11-18 04:46:47.069103] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:23.705 [2024-11-18 04:46:47.069234] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:23.705 [2024-11-18 04:46:47.069271] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:23.705 00:06:23.705 00:06:23.705 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.705 suites 1 1 n/a 0 0 00:06:23.705 tests 4 4 4 0 0 00:06:23.705 asserts 20 20 20 0 n/a 00:06:23.705 00:06:23.705 Elapsed time = 0.001 seconds 00:06:23.705 00:06:23.705 real 0m0.031s 00:06:23.705 user 0m0.015s 00:06:23.705 sys 0m0.017s 00:06:23.705 04:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.705 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 END TEST unittest_idxd_user 00:06:23.705 ************************************ 00:06:23.705 04:46:47 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:06:23.705 04:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.705 04:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.705 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 START TEST unittest_iscsi 00:06:23.705 ************************************ 00:06:23.705 04:46:47 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:06:23.705 04:46:47 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:23.705 00:06:23.705 00:06:23.705 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.705 http://cunit.sourceforge.net/ 00:06:23.705 00:06:23.705 00:06:23.705 Suite: conn_suite 00:06:23.705 Test: read_task_split_in_order_case ...passed 00:06:23.705 Test: read_task_split_reverse_order_case ...passed 00:06:23.705 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:23.705 Test: process_non_read_task_completion_test ...passed 00:06:23.705 Test: free_tasks_on_connection ...passed 00:06:23.705 Test: free_tasks_with_queued_datain ...passed 00:06:23.705 Test: 
abort_queued_datain_task_test ...passed 00:06:23.705 Test: abort_queued_datain_tasks_test ...passed 00:06:23.705 00:06:23.705 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.705 suites 1 1 n/a 0 0 00:06:23.705 tests 8 8 8 0 0 00:06:23.705 asserts 230 230 230 0 n/a 00:06:23.705 00:06:23.705 Elapsed time = 0.001 seconds 00:06:23.705 04:46:47 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:23.705 00:06:23.705 00:06:23.705 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.705 http://cunit.sourceforge.net/ 00:06:23.705 00:06:23.705 00:06:23.705 Suite: iscsi_suite 00:06:23.705 Test: param_negotiation_test ...passed 00:06:23.705 Test: list_negotiation_test ...passed 00:06:23.705 Test: parse_valid_test ...passed 00:06:23.705 Test: parse_invalid_test ...[2024-11-18 04:46:47.193399] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:23.705 [2024-11-18 04:46:47.193677] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:23.705 [2024-11-18 04:46:47.193733] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:23.705 [2024-11-18 04:46:47.193776] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:23.705 [2024-11-18 04:46:47.193876] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:23.705 [2024-11-18 04:46:47.193917] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:23.705 passed 00:06:23.705 00:06:23.705 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.705 suites 1 1 n/a 0 0 00:06:23.706 tests 4 4 4 0 0 00:06:23.706 asserts 161 161 161 0 n/a 00:06:23.706 00:06:23.706 Elapsed time = 0.005 seconds 00:06:23.706 [2024-11-18 04:46:47.194002] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:23.706 04:46:47 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:23.706 00:06:23.706 00:06:23.706 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.706 http://cunit.sourceforge.net/ 00:06:23.706 00:06:23.706 00:06:23.706 Suite: iscsi_target_node_suite 00:06:23.706 Test: add_lun_test_cases ...passed 00:06:23.706 Test: allow_any_allowed ...passed 00:06:23.706 Test: allow_ipv6_allowed ...passed 00:06:23.706 Test: allow_ipv6_denied ...passed 00:06:23.706 Test: allow_ipv6_invalid ...passed 00:06:23.706 Test: allow_ipv4_allowed ...passed 00:06:23.706 Test: allow_ipv4_denied ...passed 00:06:23.706 Test: allow_ipv4_invalid ...passed 00:06:23.706 Test: node_access_allowed ...passed 00:06:23.706 Test: node_access_denied_by_empty_netmask ...[2024-11-18 04:46:47.218625] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:23.706 [2024-11-18 04:46:47.218822] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:23.706 [2024-11-18 04:46:47.218872] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:23.706 [2024-11-18 04:46:47.218911] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:23.706 [2024-11-18 04:46:47.218943] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:23.706 passed 00:06:23.706 Test: node_access_multi_initiator_groups_cases ...passed 00:06:23.706 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:23.706 Test: chap_param_test_cases ...passed 00:06:23.706 00:06:23.706 [2024-11-18 04:46:47.219558] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:23.706 [2024-11-18 04:46:47.219609] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:23.706 [2024-11-18 04:46:47.219644] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:23.706 [2024-11-18 04:46:47.219675] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:23.706 [2024-11-18 04:46:47.219688] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:23.706 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.706 suites 1 1 n/a 0 0 00:06:23.706 tests 13 13 13 0 0 00:06:23.706 asserts 50 50 50 0 n/a 00:06:23.706 00:06:23.706 Elapsed time = 0.001 seconds 00:06:23.966 04:46:47 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:23.966 00:06:23.966 00:06:23.966 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.966 http://cunit.sourceforge.net/ 00:06:23.966 00:06:23.966 00:06:23.966 Suite: iscsi_suite 00:06:23.966 Test: op_login_check_target_test ...passed 00:06:23.966 Test: op_login_session_normal_test ...[2024-11-18 04:46:47.253240] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:23.966 [2024-11-18 04:46:47.253496] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:23.966 [2024-11-18 04:46:47.253550] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:23.966 [2024-11-18 04:46:47.253569] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:23.966 [2024-11-18 04:46:47.253623] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:23.966 [2024-11-18 04:46:47.253659] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:23.966 passed 00:06:23.966 Test: maxburstlength_test ...[2024-11-18 04:46:47.253702] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:23.966 [2024-11-18 04:46:47.253758] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:23.966 [2024-11-18 04:46:47.254014] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:23.966 [2024-11-18 04:46:47.254091] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: 
processing PDU header (opcode=5) failed on NULL(NULL) 00:06:23.966 passed 00:06:23.966 Test: underflow_for_read_transfer_test ...passed 00:06:23.966 Test: underflow_for_zero_read_transfer_test ...passed 00:06:23.966 Test: underflow_for_request_sense_test ...passed 00:06:23.966 Test: underflow_for_check_condition_test ...passed 00:06:23.966 Test: add_transfer_task_test ...passed 00:06:23.966 Test: get_transfer_task_test ...passed 00:06:23.966 Test: del_transfer_task_test ...passed 00:06:23.966 Test: clear_all_transfer_tasks_test ...passed 00:06:23.966 Test: build_iovs_test ...passed 00:06:23.966 Test: build_iovs_with_md_test ...passed 00:06:23.966 Test: pdu_hdr_op_login_test ...passed 00:06:23.966 Test: pdu_hdr_op_text_test ...[2024-11-18 04:46:47.255589] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:23.966 [2024-11-18 04:46:47.255685] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:23.966 [2024-11-18 04:46:47.255749] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:23.966 passed 00:06:23.966 Test: pdu_hdr_op_logout_test ...passed 00:06:23.966 Test: pdu_hdr_op_scsi_test ...[2024-11-18 04:46:47.255834] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:23.966 [2024-11-18 04:46:47.255918] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:23.966 [2024-11-18 04:46:47.255948] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:23.966 [2024-11-18 04:46:47.256011] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
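The param_ut block earlier in this run spells out the grammar that iscsi_parse_param() enforces: each pair needs an '=' separator, the key must be non-empty and at most 63 characters long, oversized values overflow (the 8193 and 256 cases come from different per-key caps), and duplicated keys fail. Below is a minimal standalone sketch of those checks, written against the logged rules rather than the SPDK source; the helper name and the single 8192-byte value cap are illustrative assumptions.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define MAX_KEY_LEN 63   /* "Key name length is bigger than 63" */
#define MAX_VAL_LEN 8192 /* simplification of the per-key caps ("Overflow Val 8193") */

/* Hypothetical checker mirroring the errors parse_invalid_test expects. */
static bool
iscsi_param_is_valid(const char *pair)
{
    const char *eq = strchr(pair, '=');

    if (eq == NULL) {
        fprintf(stderr, "'=' not found\n");
        return false;
    }
    if (eq == pair) {
        fprintf(stderr, "Empty key\n");
        return false;
    }
    if ((size_t)(eq - pair) > MAX_KEY_LEN) {
        fprintf(stderr, "Key name length is bigger than %d\n", MAX_KEY_LEN);
        return false;
    }
    if (strlen(eq + 1) > MAX_VAL_LEN) {
        fprintf(stderr, "Overflow Val %zu\n", strlen(eq + 1));
        return false;
    }
    return true;
}

int
main(void)
{
    printf("%d\n", iscsi_param_is_valid("HeaderDigest=None")); /* 1: well-formed */
    printf("%d\n", iscsi_param_is_valid("NoSeparator"));       /* 0: '=' not found */
    printf("%d\n", iscsi_param_is_valid("=None"));             /* 0: empty key */
    return 0;
}

Duplicate-key detection (the "Duplicated Key B" case) needs state carried across the whole list of pairs, so it is left out of this sketch.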
00:06:23.966 [2024-11-18 04:46:47.256104] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:23.966 [2024-11-18 04:46:47.256143] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:23.966 [2024-11-18 04:46:47.256174] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:23.966 [2024-11-18 04:46:47.256254] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:23.966 [2024-11-18 04:46:47.256303] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:23.966 passed 00:06:23.966 Test: pdu_hdr_op_task_mgmt_test ...passed 00:06:23.966 Test: pdu_hdr_op_nopout_test ...[2024-11-18 04:46:47.256432] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:23.966 [2024-11-18 04:46:47.256499] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:23.966 [2024-11-18 04:46:47.256550] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:23.966 passed 00:06:23.966 Test: pdu_hdr_op_data_test ...[2024-11-18 04:46:47.256711] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:23.966 [2024-11-18 04:46:47.256762] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:23.966 [2024-11-18 04:46:47.256790] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:23.966 [2024-11-18 04:46:47.256807] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:23.966 [2024-11-18 04:46:47.256870] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:23.966 [2024-11-18 04:46:47.256938] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:23.966 [2024-11-18 04:46:47.256993] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:23.966 passed 00:06:23.966 Test: empty_text_with_cbit_test ...passed 00:06:23.966 Test: pdu_payload_read_test ...[2024-11-18 04:46:47.257018] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:23.966 [2024-11-18 04:46:47.257060] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:23.966 [2024-11-18 04:46:47.257124] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:23.966 [2024-11-18 04:46:47.257147] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:23.966 [2024-11-18 04:46:47.258996] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:23.966 passed 00:06:23.966 Test: data_out_pdu_sequence_test ...passed 00:06:23.966 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:23.966 00:06:23.966 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.966 suites 1 1 n/a 0 0 00:06:23.966 tests 24 24 24 0 0 00:06:23.966 asserts 150253 150253 150253 0 n/a 00:06:23.966 00:06:23.966 Elapsed time = 0.014 seconds 00:06:23.966 04:46:47 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:23.966 00:06:23.966 00:06:23.966 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.966 http://cunit.sourceforge.net/ 00:06:23.966 00:06:23.966 00:06:23.966 Suite: init_grp_suite 00:06:23.966 Test: create_initiator_group_success_case ...passed 00:06:23.966 Test: find_initiator_group_success_case ...passed 00:06:23.966 Test: register_initiator_group_twice_case ...passed 00:06:23.966 Test: add_initiator_name_success_case ...passed 00:06:23.966 Test: add_initiator_name_fail_case ...passed 00:06:23.966 Test: delete_all_initiator_names_success_case ...passed 00:06:23.966 Test: add_netmask_success_case ...passed 00:06:23.966 Test: add_netmask_fail_case ...[2024-11-18 04:46:47.305236] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:23.966 [2024-11-18 04:46:47.305628] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:23.966 passed 00:06:23.966 Test: delete_all_netmasks_success_case ...passed 00:06:23.966 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:23.966 Test: netmask_overwrite_all_to_any_case ...passed 00:06:23.966 Test: add_delete_initiator_names_case ...passed 00:06:23.966 Test: add_duplicated_initiator_names_case ...passed 00:06:23.966 Test: delete_nonexisting_initiator_names_case ...passed 00:06:23.966 Test: add_delete_netmasks_case ...passed 00:06:23.966 Test: add_duplicated_netmasks_case ...passed 00:06:23.966 Test: delete_nonexisting_netmasks_case ...passed 00:06:23.966 00:06:23.966 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.966 suites 1 1 n/a 0 0 00:06:23.966 tests 17 17 17 0 0 00:06:23.966 asserts 108 108 108 0 n/a 00:06:23.966 00:06:23.966 Elapsed time = 0.001 seconds 00:06:23.966 04:46:47 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:23.966 00:06:23.966 00:06:23.966 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.966 http://cunit.sourceforge.net/ 00:06:23.966 00:06:23.966 00:06:23.967 Suite: portal_grp_suite 00:06:23.967 Test: portal_create_ipv4_normal_case ...passed 00:06:23.967 Test: portal_create_ipv6_normal_case ...passed 00:06:23.967 Test: portal_create_ipv4_wildcard_case ...passed 00:06:23.967 Test: portal_create_ipv6_wildcard_case ...passed 00:06:23.967 Test: portal_create_twice_case ...passed 00:06:23.967 Test: portal_grp_register_unregister_case ...passed 00:06:23.967 Test: portal_grp_register_twice_case ...passed 00:06:23.967 Test: portal_grp_add_delete_case ...[2024-11-18 04:46:47.340058] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:23.967 passed 00:06:23.967 Test: portal_grp_add_delete_twice_case ...passed 00:06:23.967 00:06:23.967 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:23.967 suites 1 1 n/a 0 0 00:06:23.967 tests 9 9 9 0 0 00:06:23.967 asserts 44 44 44 0 n/a 00:06:23.967 00:06:23.967 Elapsed time = 0.003 seconds 00:06:23.967 00:06:23.967 real 0m0.224s 00:06:23.967 user 0m0.112s 00:06:23.967 sys 0m0.115s 00:06:23.967 ************************************ 00:06:23.967 END TEST unittest_iscsi 00:06:23.967 04:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.967 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.967 ************************************ 00:06:23.967 04:46:47 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:06:23.967 04:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.967 04:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.967 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.967 ************************************ 00:06:23.967 START TEST unittest_json 00:06:23.967 ************************************ 00:06:23.967 04:46:47 -- common/autotest_common.sh@1114 -- # unittest_json 00:06:23.967 04:46:47 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:23.967 00:06:23.967 00:06:23.967 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.967 http://cunit.sourceforge.net/ 00:06:23.967 00:06:23.967 00:06:23.967 Suite: json 00:06:23.967 Test: test_parse_literal ...passed 00:06:23.967 Test: test_parse_string_simple ...passed 00:06:23.967 Test: test_parse_string_control_chars ...passed 00:06:23.967 Test: test_parse_string_utf8 ...passed 00:06:23.967 Test: test_parse_string_escapes_twochar ...passed 00:06:23.967 Test: test_parse_string_escapes_unicode ...passed 00:06:23.967 Test: test_parse_number ...passed 00:06:23.967 Test: test_parse_array ...passed 00:06:23.967 Test: test_parse_object ...passed 00:06:23.967 Test: test_parse_nesting ...passed 00:06:23.967 Test: test_parse_comment ...passed 00:06:23.967 00:06:23.967 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.967 suites 1 1 n/a 0 0 00:06:23.967 tests 11 11 11 0 0 00:06:23.967 asserts 1516 1516 1516 0 n/a 00:06:23.967 00:06:23.967 Elapsed time = 0.002 seconds 00:06:23.967 04:46:47 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:23.967 00:06:23.967 00:06:23.967 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.967 http://cunit.sourceforge.net/ 00:06:23.967 00:06:23.967 00:06:23.967 Suite: json 00:06:23.967 Test: test_strequal ...passed 00:06:23.967 Test: test_num_to_uint16 ...passed 00:06:23.967 Test: test_num_to_int32 ...passed 00:06:23.967 Test: test_num_to_uint64 ...passed 00:06:23.967 Test: test_decode_object ...passed 00:06:23.967 Test: test_decode_array ...passed 00:06:23.967 Test: test_decode_bool ...passed 00:06:23.967 Test: test_decode_uint16 ...passed 00:06:23.967 Test: test_decode_int32 ...passed 00:06:23.967 Test: test_decode_uint32 ...passed 00:06:23.967 Test: test_decode_uint64 ...passed 00:06:23.967 Test: test_decode_string ...passed 00:06:23.967 Test: test_decode_uuid ...passed 00:06:23.967 Test: test_find ...passed 00:06:23.967 Test: test_find_array ...passed 00:06:23.967 Test: test_iterating ...passed 00:06:23.967 Test: test_free_object ...passed 00:06:23.967 00:06:23.967 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.967 suites 1 1 n/a 0 0 00:06:23.967 tests 17 17 17 0 0 00:06:23.967 asserts 236 236 236 0 n/a 00:06:23.967 00:06:23.967 Elapsed time = 0.001 seconds 00:06:23.967 04:46:47 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:24.227 00:06:24.227 00:06:24.227 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.227 http://cunit.sourceforge.net/ 00:06:24.227 00:06:24.227 00:06:24.227 Suite: json 00:06:24.227 Test: test_write_literal ...passed 00:06:24.227 Test: test_write_string_simple ...passed 00:06:24.227 Test: test_write_string_escapes ...passed 00:06:24.227 Test: test_write_string_utf16le ...passed 00:06:24.227 Test: test_write_number_int32 ...passed 00:06:24.227 Test: test_write_number_uint32 ...passed 00:06:24.227 Test: test_write_number_uint128 ...passed 00:06:24.227 Test: test_write_string_number_uint128 ...passed 00:06:24.227 Test: test_write_number_int64 ...passed 00:06:24.227 Test: test_write_number_uint64 ...passed 00:06:24.227 Test: test_write_number_double ...passed 00:06:24.227 Test: test_write_uuid ...passed 00:06:24.227 Test: test_write_array ...passed 00:06:24.227 Test: test_write_object ...passed 00:06:24.227 Test: test_write_nesting ...passed 00:06:24.227 Test: test_write_val ...passed 00:06:24.227 00:06:24.227 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.227 suites 1 1 n/a 0 0 00:06:24.227 tests 16 16 16 0 0 00:06:24.227 asserts 918 918 918 0 n/a 00:06:24.227 00:06:24.227 Elapsed time = 0.005 seconds 00:06:24.227 04:46:47 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:24.227 00:06:24.227 00:06:24.227 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.227 http://cunit.sourceforge.net/ 00:06:24.227 00:06:24.227 00:06:24.227 Suite: jsonrpc 00:06:24.227 Test: test_parse_request ...passed 00:06:24.227 Test: test_parse_request_streaming ...passed 00:06:24.227 00:06:24.227 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.227 suites 1 1 n/a 0 0 00:06:24.227 tests 2 2 2 0 0 00:06:24.227 asserts 289 289 289 0 n/a 00:06:24.227 00:06:24.227 Elapsed time = 0.004 seconds 00:06:24.227 00:06:24.227 real 0m0.141s 00:06:24.227 user 0m0.073s 00:06:24.227 sys 0m0.070s 00:06:24.227 04:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.227 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.227 ************************************ 00:06:24.227 END TEST unittest_json 00:06:24.227 ************************************ 00:06:24.227 04:46:47 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:06:24.227 04:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.227 04:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.227 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.227 ************************************ 00:06:24.227 START TEST unittest_rpc 00:06:24.227 ************************************ 00:06:24.227 04:46:47 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:06:24.227 04:46:47 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:24.227 00:06:24.227 00:06:24.227 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.227 http://cunit.sourceforge.net/ 00:06:24.227 00:06:24.227 00:06:24.227 Suite: rpc 00:06:24.227 Test: test_jsonrpc_handler ...passed 00:06:24.227 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:24.227 Test: test_rpc_get_methods ...passed 00:06:24.227 Test: test_rpc_spdk_get_version ...passed 00:06:24.227 Test: test_spdk_rpc_listen_close ...passed 00:06:24.227 00:06:24.227 [2024-11-18 04:46:47.621540] 
/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:24.227 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.227 suites 1 1 n/a 0 0 00:06:24.227 tests 5 5 5 0 0 00:06:24.227 asserts 20 20 20 0 n/a 00:06:24.227 00:06:24.227 Elapsed time = 0.000 seconds 00:06:24.227 00:06:24.227 real 0m0.031s 00:06:24.227 user 0m0.013s 00:06:24.227 sys 0m0.018s 00:06:24.227 ************************************ 00:06:24.227 END TEST unittest_rpc 00:06:24.227 04:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.227 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.227 ************************************ 00:06:24.227 04:46:47 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:24.227 04:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.227 04:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.227 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.227 ************************************ 00:06:24.227 START TEST unittest_notify 00:06:24.227 ************************************ 00:06:24.227 04:46:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:24.227 00:06:24.227 00:06:24.227 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.227 http://cunit.sourceforge.net/ 00:06:24.228 00:06:24.228 00:06:24.228 Suite: app_suite 00:06:24.228 Test: notify ...passed 00:06:24.228 00:06:24.228 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.228 suites 1 1 n/a 0 0 00:06:24.228 tests 1 1 1 0 0 00:06:24.228 asserts 13 13 13 0 n/a 00:06:24.228 00:06:24.228 Elapsed time = 0.000 seconds 00:06:24.228 00:06:24.228 real 0m0.025s 00:06:24.228 user 0m0.017s 00:06:24.228 sys 0m0.009s 00:06:24.228 04:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.228 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.228 ************************************ 00:06:24.228 END TEST unittest_notify 00:06:24.228 ************************************ 00:06:24.488 04:46:47 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:06:24.488 04:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.488 04:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.488 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.488 ************************************ 00:06:24.488 START TEST unittest_nvme 00:06:24.488 ************************************ 00:06:24.488 04:46:47 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:06:24.488 04:46:47 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:24.488 00:06:24.488 00:06:24.488 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.488 http://cunit.sourceforge.net/ 00:06:24.488 00:06:24.488 00:06:24.488 Suite: nvme 00:06:24.488 Test: test_opc_data_transfer ...passed 00:06:24.488 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:24.488 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:24.488 Test: test_trid_parse_and_compare ...[2024-11-18 04:46:47.780167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:24.488 passed 00:06:24.488 Test: test_trid_trtype_str ...passed 00:06:24.488 Test: test_trid_adrfam_str ...passed 00:06:24.488 Test: test_nvme_ctrlr_probe ...[2024-11-18 04:46:47.780820] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:24.488 [2024-11-18 04:46:47.780880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:24.488 [2024-11-18 04:46:47.780906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:24.488 [2024-11-18 04:46:47.780952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:24.488 [2024-11-18 04:46:47.780982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:24.488 passed 00:06:24.488 Test: test_spdk_nvme_probe ...passed 00:06:24.488 Test: test_spdk_nvme_connect ...[2024-11-18 04:46:47.781258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:24.488 [2024-11-18 04:46:47.781345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:24.488 [2024-11-18 04:46:47.781380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:24.488 [2024-11-18 04:46:47.781494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:24.488 [2024-11-18 04:46:47.781533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:24.488 [2024-11-18 04:46:47.781619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:24.488 passed 00:06:24.488 Test: test_nvme_ctrlr_probe_internal ...passed 00:06:24.488 Test: test_nvme_init_controllers ...[2024-11-18 04:46:47.782063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:24.488 [2024-11-18 04:46:47.782103] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:24.488 [2024-11-18 04:46:47.782249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:24.488 [2024-11-18 04:46:47.782285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:24.488 [2024-11-18 04:46:47.782387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:24.488 passed 00:06:24.488 Test: test_nvme_driver_init ...[2024-11-18 04:46:47.782492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:24.488 [2024-11-18 04:46:47.782535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:24.488 [2024-11-18 04:46:47.896845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:24.488 [2024-11-18 04:46:47.896973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:24.488 passed 00:06:24.488 Test: test_spdk_nvme_detach ...passed 00:06:24.488 Test: test_nvme_completion_poll_cb ...passed 00:06:24.488 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:24.488 Test: test_nvme_allocate_request_null 
...passed 00:06:24.488 Test: test_nvme_allocate_request ...passed 00:06:24.488 Test: test_nvme_free_request ...passed 00:06:24.488 Test: test_nvme_allocate_request_user_copy ...passed 00:06:24.488 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:24.488 Test: test_nvme_request_check_timeout ...passed 00:06:24.488 Test: test_nvme_wait_for_completion ...passed 00:06:24.488 Test: test_spdk_nvme_parse_func ...passed 00:06:24.488 Test: test_spdk_nvme_detach_async ...passed 00:06:24.488 Test: test_nvme_parse_addr ...passed 00:06:24.488 00:06:24.488 [2024-11-18 04:46:47.897999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:24.488 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.488 suites 1 1 n/a 0 0 00:06:24.488 tests 25 25 25 0 0 00:06:24.488 asserts 326 326 326 0 n/a 00:06:24.488 00:06:24.488 Elapsed time = 0.007 seconds 00:06:24.488 04:46:47 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:24.488 00:06:24.488 00:06:24.488 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.488 http://cunit.sourceforge.net/ 00:06:24.488 00:06:24.488 00:06:24.488 Suite: nvme_ctrlr 00:06:24.488 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-18 04:46:47.932275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 passed 00:06:24.488 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-18 04:46:47.933958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 passed 00:06:24.488 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-18 04:46:47.935386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 passed 00:06:24.488 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-18 04:46:47.936721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 passed 00:06:24.488 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-18 04:46:47.938108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 [2024-11-18 04:46:47.939419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 04:46:47.940635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 04:46:47.941810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:24.488 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-18 04:46:47.944349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 [2024-11-18 04:46:47.946752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 04:46:47.948052] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:24.488 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-18 04:46:47.950735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 [2024-11-18 04:46:47.952065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 04:46:47.954517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:24.488 Test: test_nvme_ctrlr_init_delay ...[2024-11-18 04:46:47.957282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 passed 00:06:24.488 Test: test_alloc_io_qpair_rr_1 ...[2024-11-18 04:46:47.958739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.488 [2024-11-18 04:46:47.959028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:24.488 passed 00:06:24.488 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:24.488 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:24.489 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-18 04:46:47.959144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:24.489 [2024-11-18 04:46:47.959247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:24.489 [2024-11-18 04:46:47.959308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:24.489 [2024-11-18 04:46:47.959497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.489 passed 00:06:24.489 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-18 04:46:47.959725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.489 [2024-11-18 04:46:47.959896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:24.489 passed 00:06:24.489 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-18 04:46:47.960238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:24.489 passed 00:06:24.489 Test: test_nvme_ctrlr_fail ...passed 00:06:24.489 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-11-18 04:46:47.960341] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 
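The trid_parse_and_compare failures logged earlier in the nvme_ut suite come from spdk_nvme_transport_id_parse(), which consumes a whitespace-separated list of key:value pairs and rejects pairs with no ':' or '=' separator, keys longer than 31 characters, and keys without values. A minimal sketch of one accepted and one rejected transport ID string follows; the PCIe address is a placeholder.

#include "spdk/nvme.h"
#include <stdio.h>

int
main(void)
{
    struct spdk_nvme_transport_id trid = {0};

    /* Well-formed: every key has a separator and a value. */
    if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:04:00.0") == 0) {
        printf("parsed, trtype=%d\n", trid.trtype);
    }

    /* Malformed: "trtype" has no ':' or '=', matching the parse_next_key errors logged earlier. */
    if (spdk_nvme_transport_id_parse(&trid, "trtype PCIe") != 0) {
        printf("rejected, as trid_parse_and_compare expects\n");
    }
    return 0;
}

Parsing is pure string handling, so this runs without initializing the SPDK environment.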
00:06:24.489 [2024-11-18 04:46:47.960423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:06:24.489 [2024-11-18 04:46:47.960504] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:24.489 [2024-11-18 04:46:47.960572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:24.489 passed 00:06:24.489 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:24.489 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:24.489 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-18 04:46:47.960915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:24.752 passed 00:06:24.752 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:24.752 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:24.752 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:24.752 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-18 04:46:48.266386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-18 04:46:48.273918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-18 04:46:48.275189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 [2024-11-18 04:46:48.275304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:25.018 passed 00:06:25.018 Test: test_alloc_io_qpair_fail ...[2024-11-18 04:46:48.276582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:25.018 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:25.018 Test: test_nvme_ctrlr_set_state ...passed 00:06:25.018 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-18 04:46:48.276718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:25.018 [2024-11-18 04:46:48.276888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
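test_nvme_ctrlr_init_set_keep_alive_timeout just above drives the Keep Alive timeout negotiation (the Get Feature failure with SC 6 SCT 0 is the test exercising the error path). In application code the timeout is supplied as a controller option at connect time. A hedged sketch using the public opts API, assuming an already-initialized SPDK environment and a previously parsed trid; the 10-second value is arbitrary.

#include "spdk/nvme.h"

/* Sketch: ask for a 10 s keep-alive window when connecting a controller. */
static struct spdk_nvme_ctrlr *
connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
{
    struct spdk_nvme_ctrlr_opts opts;

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    opts.keep_alive_timeout_ms = 10000; /* 0 disables keep-alive entirely */

    return spdk_nvme_connect(trid, &opts, sizeof(opts));
}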
00:06:25.018 [2024-11-18 04:46:48.276953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-18 04:46:48.299559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-18 04:46:48.335362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_reset ...[2024-11-18 04:46:48.336923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_aer_callback ...[2024-11-18 04:46:48.337259] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-18 04:46:48.338748] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:25.018 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:25.018 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-18 04:46:48.340486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:25.018 Test: test_nvme_ctrlr_ana_resize ...[2024-11-18 04:46:48.341946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:25.018 Test: test_nvme_transport_ctrlr_ready ...[2024-11-18 04:46:48.343568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:25.018 passed 00:06:25.018 Test: test_nvme_ctrlr_disable ...[2024-11-18 04:46:48.343610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:25.018 [2024-11-18 04:46:48.343669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.018 passed 00:06:25.018 00:06:25.018 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.018 suites 1 1 n/a 0 0 00:06:25.018 tests 43 43 43 0 0 00:06:25.018 asserts 10418 10418 10418 0 n/a 00:06:25.018 00:06:25.018 Elapsed time = 0.371 seconds 00:06:25.018 04:46:48 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:25.018 00:06:25.018 00:06:25.018 CUnit - A unit testing framework for C - Version 2.1-3 
00:06:25.018 http://cunit.sourceforge.net/ 00:06:25.018 00:06:25.018 00:06:25.018 Suite: nvme_ctrlr_cmd 00:06:25.018 Test: test_get_log_pages ...passed 00:06:25.018 Test: test_set_feature_cmd ...passed 00:06:25.018 Test: test_set_feature_ns_cmd ...passed 00:06:25.018 Test: test_get_feature_cmd ...passed 00:06:25.018 Test: test_get_feature_ns_cmd ...passed 00:06:25.018 Test: test_abort_cmd ...passed 00:06:25.018 Test: test_set_host_id_cmds ...[2024-11-18 04:46:48.389561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:25.018 passed 00:06:25.018 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:25.018 Test: test_io_raw_cmd ...passed 00:06:25.018 Test: test_io_raw_cmd_with_md ...passed 00:06:25.018 Test: test_namespace_attach ...passed 00:06:25.018 Test: test_namespace_detach ...passed 00:06:25.018 Test: test_namespace_create ...passed 00:06:25.018 Test: test_namespace_delete ...passed 00:06:25.018 Test: test_doorbell_buffer_config ...passed 00:06:25.018 Test: test_format_nvme ...passed 00:06:25.018 Test: test_fw_commit ...passed 00:06:25.018 Test: test_fw_image_download ...passed 00:06:25.018 Test: test_sanitize ...passed 00:06:25.018 Test: test_directive ...passed 00:06:25.018 Test: test_nvme_request_add_abort ...passed 00:06:25.018 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:25.018 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:25.018 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:25.018 00:06:25.018 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.018 suites 1 1 n/a 0 0 00:06:25.018 tests 24 24 24 0 0 00:06:25.018 asserts 198 198 198 0 n/a 00:06:25.018 00:06:25.018 Elapsed time = 0.001 seconds 00:06:25.018 04:46:48 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:25.018 00:06:25.018 00:06:25.018 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.018 http://cunit.sourceforge.net/ 00:06:25.018 00:06:25.018 00:06:25.018 Suite: nvme_ctrlr_cmd 00:06:25.018 Test: test_geometry_cmd ...passed 00:06:25.018 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:25.018 00:06:25.018 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.018 suites 1 1 n/a 0 0 00:06:25.018 tests 2 2 2 0 0 00:06:25.018 asserts 7 7 7 0 n/a 00:06:25.018 00:06:25.018 Elapsed time = 0.000 seconds 00:06:25.018 04:46:48 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:25.018 00:06:25.018 00:06:25.018 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.018 http://cunit.sourceforge.net/ 00:06:25.018 00:06:25.018 00:06:25.018 Suite: nvme 00:06:25.018 Test: test_nvme_ns_construct ...passed 00:06:25.018 Test: test_nvme_ns_uuid ...passed 00:06:25.018 Test: test_nvme_ns_csi ...passed 00:06:25.018 Test: test_nvme_ns_data ...passed 00:06:25.018 Test: test_nvme_ns_set_identify_data ...passed 00:06:25.018 Test: test_spdk_nvme_ns_get_values ...passed 00:06:25.018 Test: test_spdk_nvme_ns_is_active ...passed 00:06:25.018 Test: spdk_nvme_ns_supports ...passed 00:06:25.018 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:25.018 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:25.018 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:25.019 Test: test_nvme_ns_find_id_desc ...passed 00:06:25.019 00:06:25.019 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.019 suites 1 1 n/a 0 0 00:06:25.019 tests 
12 12 12 0 0 00:06:25.019 asserts 83 83 83 0 n/a 00:06:25.019 00:06:25.019 Elapsed time = 0.001 seconds 00:06:25.019 04:46:48 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:25.019 00:06:25.019 00:06:25.019 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.019 http://cunit.sourceforge.net/ 00:06:25.019 00:06:25.019 00:06:25.019 Suite: nvme_ns_cmd 00:06:25.019 Test: split_test ...passed 00:06:25.019 Test: split_test2 ...passed 00:06:25.019 Test: split_test3 ...passed 00:06:25.019 Test: split_test4 ...passed 00:06:25.019 Test: test_nvme_ns_cmd_flush ...passed 00:06:25.019 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:25.019 Test: test_nvme_ns_cmd_copy ...passed 00:06:25.019 Test: test_io_flags ...passed 00:06:25.019 Test: test_nvme_ns_cmd_write_zeroes ...[2024-11-18 04:46:48.482713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:25.019 passed 00:06:25.019 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:25.019 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:25.019 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:25.019 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:25.019 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:25.019 Test: test_cmd_child_request ...passed 00:06:25.019 Test: test_nvme_ns_cmd_readv ...passed 00:06:25.019 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:25.019 Test: test_nvme_ns_cmd_writev ...passed 00:06:25.019 Test: test_nvme_ns_cmd_write_with_md ...[2024-11-18 04:46:48.483968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:25.019 passed 00:06:25.019 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:25.019 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:25.019 Test: test_nvme_ns_cmd_comparev ...passed 00:06:25.019 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:25.019 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:25.019 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:25.019 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:25.019 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:25.019 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:06:25.019 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:06:25.019 Test: test_nvme_ns_cmd_verify ...passed 00:06:25.019 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:25.019 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:25.019 00:06:25.019 [2024-11-18 04:46:48.485763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:25.019 [2024-11-18 04:46:48.485899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:25.019 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.019 suites 1 1 n/a 0 0 00:06:25.019 tests 32 32 32 0 0 00:06:25.019 asserts 550 550 550 0 n/a 00:06:25.019 00:06:25.019 Elapsed time = 0.005 seconds 00:06:25.019 04:46:48 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:25.019 00:06:25.019 00:06:25.019 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.019 http://cunit.sourceforge.net/ 00:06:25.019 00:06:25.019 00:06:25.019 Suite: nvme_ns_cmd 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:25.019 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:25.019 00:06:25.019 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.019 suites 1 1 n/a 0 0 00:06:25.019 tests 12 12 12 0 0 00:06:25.019 asserts 123 123 123 0 n/a 00:06:25.019 00:06:25.019 Elapsed time = 0.001 seconds 00:06:25.019 04:46:48 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:25.279 00:06:25.279 00:06:25.279 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.279 http://cunit.sourceforge.net/ 00:06:25.279 00:06:25.279 00:06:25.279 Suite: nvme_qpair 00:06:25.279 Test: test3 ...passed 00:06:25.279 Test: test_ctrlr_failed ...passed 00:06:25.279 Test: struct_packing ...passed 00:06:25.279 Test: test_nvme_qpair_process_completions ...[2024-11-18 04:46:48.551257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:25.279 [2024-11-18 04:46:48.551546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:25.279 [2024-11-18 04:46:48.551633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:25.279 [2024-11-18 04:46:48.551684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:25.279 passed 00:06:25.279 Test: test_nvme_completion_is_retry ...passed 00:06:25.279 Test: test_get_status_string ...passed 00:06:25.279 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:06:25.279 Test: test_nvme_qpair_submit_request ...passed 00:06:25.279 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:25.279 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:25.279 Test: test_nvme_qpair_init_deinit ...passed 00:06:25.279 Test: test_nvme_get_sgl_print_info ...passed 00:06:25.279 00:06:25.279 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.279 suites 1 1 n/a 0 0 00:06:25.279 tests 12 12 12 0 0 00:06:25.279 asserts 154 154 154 0 n/a 00:06:25.279 00:06:25.279 Elapsed time = 0.002 seconds 00:06:25.279 [2024-11-18 04:46:48.552277] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:25.279 04:46:48 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:25.279 00:06:25.279 00:06:25.279 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.279 http://cunit.sourceforge.net/ 00:06:25.279 00:06:25.279 00:06:25.279 Suite: nvme_pcie 00:06:25.279 Test: test_prp_list_append 
...[2024-11-18 04:46:48.582027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:25.279 [2024-11-18 04:46:48.582241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:25.279 [2024-11-18 04:46:48.582285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:25.279 passed 00:06:25.279 Test: test_nvme_pcie_hotplug_monitor ...[2024-11-18 04:46:48.582454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:25.279 [2024-11-18 04:46:48.582555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:25.279 passed 00:06:25.279 Test: test_shadow_doorbell_update ...passed 00:06:25.279 Test: test_build_contig_hw_sgl_request ...passed 00:06:25.279 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:25.279 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:25.279 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:25.279 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:25.279 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:25.279 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:25.279 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-18 04:46:48.582790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:25.279 passed 00:06:25.279 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:25.279 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:06:25.279 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:06:25.279 00:06:25.279 [2024-11-18 04:46:48.582950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
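The test_prp_list_append failures just above encode the alignment rules the PCIe transport applies while building a PRP list: the starting virtual address must be dword-aligned, and every PRP entry after the first must sit on a memory-page boundary (0x900800 is not, on the 4 KiB page assumed here). A small worked version of those two checks, using the exact addresses from the log; the function names are illustrative restatements, not the SPDK internals.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_PAGE_SIZE 0x1000u /* 4 KiB, the usual NVMe memory page size */

static bool prp_start_ok(uintptr_t virt_addr)
{
    return (virt_addr & 3u) == 0; /* "virt_addr 0x100001 not dword aligned" */
}

static bool prp_entry_ok(uintptr_t prp)
{
    return (prp & (NVME_PAGE_SIZE - 1)) == 0; /* "PRP 2 not page aligned (0x900800)" */
}

int main(void)
{
    printf("0x100001 start ok? %d\n", prp_start_ok(0x100001)); /* 0: 0x100001 & 3 = 1 */
    printf("0x900800 entry ok? %d\n", prp_entry_ok(0x900800)); /* 0: low 12 bits = 0x800 */
    printf("0x901000 entry ok? %d\n", prp_entry_ok(0x901000)); /* 1: page aligned */
    return 0;
}

The "out of PRP entries" case is the third failure mode: a transfer that needs more list entries than one page of PRP list can hold must be split or rejected.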
00:06:25.279 [2024-11-18 04:46:48.583005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:25.279 [2024-11-18 04:46:48.583055] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:25.279 [2024-11-18 04:46:48.583104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:25.279 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.279 suites 1 1 n/a 0 0 00:06:25.279 tests 14 14 14 0 0 00:06:25.279 asserts 235 235 235 0 n/a 00:06:25.279 00:06:25.279 Elapsed time = 0.001 seconds 00:06:25.279 04:46:48 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:25.279 00:06:25.280 00:06:25.280 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.280 http://cunit.sourceforge.net/ 00:06:25.280 00:06:25.280 00:06:25.280 Suite: nvme_ns_cmd 00:06:25.280 Test: nvme_poll_group_create_test ...passed 00:06:25.280 Test: nvme_poll_group_add_remove_test ...passed 00:06:25.280 Test: nvme_poll_group_process_completions ...passed 00:06:25.280 Test: nvme_poll_group_destroy_test ...passed 00:06:25.280 Test: nvme_poll_group_get_free_stats ...passed 00:06:25.280 00:06:25.280 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.280 suites 1 1 n/a 0 0 00:06:25.280 tests 5 5 5 0 0 00:06:25.280 asserts 75 75 75 0 n/a 00:06:25.280 00:06:25.280 Elapsed time = 0.001 seconds 00:06:25.280 04:46:48 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:25.280 00:06:25.280 00:06:25.280 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.280 http://cunit.sourceforge.net/ 00:06:25.280 00:06:25.280 00:06:25.280 Suite: nvme_quirks 00:06:25.280 Test: test_nvme_quirks_striping ...passed 00:06:25.280 00:06:25.280 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.280 suites 1 1 n/a 0 0 00:06:25.280 tests 1 1 1 0 0 00:06:25.280 asserts 5 5 5 0 n/a 00:06:25.280 00:06:25.280 Elapsed time = 0.000 seconds 00:06:25.280 04:46:48 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:25.280 00:06:25.280 00:06:25.280 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.280 http://cunit.sourceforge.net/ 00:06:25.280 00:06:25.280 00:06:25.280 Suite: nvme_tcp 00:06:25.280 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:25.280 Test: test_nvme_tcp_build_iovs ...passed 00:06:25.280 Test: test_nvme_tcp_build_sgl_request ...passed 00:06:25.280 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:25.280 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:25.280 Test: test_nvme_tcp_req_complete_safe ...passed[2024-11-18 04:46:48.661909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7c3363e0d2e0, and the iovcnt=16, remaining_size=28672 00:06:25.280 00:06:25.280 Test: test_nvme_tcp_req_get ...passed 00:06:25.280 Test: test_nvme_tcp_req_init ...passed 00:06:25.280 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:25.280 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:25.280 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:06:25.280 Test: test_nvme_tcp_alloc_reqs ...[2024-11-18 04:46:48.662536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7c3363909030 is same with the state(6) to be set 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:06:25.280 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-18 04:46:48.662890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363d09070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.662985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7c3363c0a6e0 00:06:25.280 [2024-11-18 04:46:48.663030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:25.280 [2024-11-18 04:46:48.663065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:25.280 [2024-11-18 04:46:48.663129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-18 04:46:48.663161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:25.280 [2024-11-18 04:46:48.663213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363c0a070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.663639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:25.280 [2024-11-18 04:46:48.663681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:25.280 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:06:25.280 Test: test_nvme_tcp_icresp_handle ...passed 00:06:25.280 Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-18 04:46:48.664034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: 
*ERROR*: dst_addr nvme_parse_addr() failed 00:06:25.280 [2024-11-18 04:46:48.664125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7c3363c0b540): PDU Sequence Error 00:06:25.280 [2024-11-18 04:46:48.664202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:25.280 [2024-11-18 04:46:48.664237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:25.280 [2024-11-18 04:46:48.664274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363d0d070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.664304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:25.280 [2024-11-18 04:46:48.664337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363d0d070 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.664364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363d0d070 is same with the state(0) to be set 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:06:25.280 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:25.280 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-18 04:46:48.664426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7c3363c0c540): PDU Sequence Error 00:06:25.280 [2024-11-18 04:46:48.664512] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7c3363d0f200 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-18 04:46:48.664714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7c3363e25480, errno=0, rc=0 00:06:25.280 [2024-11-18 04:46:48.664772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363e25480 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.664814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c3363e25480 is same with the state(5) to be set 00:06:25.280 [2024-11-18 04:46:48.664877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c3363e25480 (0): Success 00:06:25.280 [2024-11-18 04:46:48.664927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c3363e25480 (0): Success 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-11-18 04:46:48.772219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:25.280 [2024-11-18 04:46:48.772333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
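The create-qpair failures just above ("Failed to create qpair with size 0/1. Minimum queue size is 2.") reflect an NVMe-wide rule: a submission or completion queue needs at least two slots, because one slot always stays empty to tell a full ring apart from an empty one. Queue depth is chosen through the I/O qpair options; a minimal sketch, assuming an already-connected ctrlr and an initialized SPDK environment; the depth of 256 is arbitrary.

#include "spdk/nvme.h"

/* Sketch: request a deeper I/O queue than the transport default. */
static struct spdk_nvme_qpair *
alloc_deep_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.io_queue_size = 256; /* anything below 2 fails, as tcp_ut logs above */

    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}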
00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:06:25.280 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-18 04:46:48.772698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:25.280 [2024-11-18 04:46:48.772735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:25.280 [2024-11-18 04:46:48.772946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:25.280 [2024-11-18 04:46:48.772984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:25.280 [2024-11-18 04:46:48.773080] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:25.280 [2024-11-18 04:46:48.773159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:25.280 passed 00:06:25.280 Test: test_nvme_tcp_qpair_submit_request ...[2024-11-18 04:46:48.773306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513000001540 with addr=192.168.1.78, port=23 00:06:25.280 [2024-11-18 04:46:48.773360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:25.280 [2024-11-18 04:46:48.773529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x513000001a80, and the iovcnt=1, remaining_size=1024 00:06:25.280 passed 00:06:25.280 00:06:25.280 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.280 suites 1 1 n/a 0 0 00:06:25.280 tests 27 27 27 0 0 00:06:25.280 asserts 624 624 624 0 n/a 00:06:25.280 00:06:25.280 Elapsed time = 0.112 seconds 00:06:25.280 [2024-11-18 04:46:48.773588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:25.280 04:46:48 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:25.540 00:06:25.540 00:06:25.540 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.540 http://cunit.sourceforge.net/ 00:06:25.540 00:06:25.540 00:06:25.540 Suite: nvme_transport 00:06:25.540 Test: test_nvme_get_transport ...passed 00:06:25.540 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:25.540 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:25.540 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:25.540 Test: test_ctrlr_get_memory_domains ...passed 00:06:25.540 00:06:25.540 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.540 suites 1 1 n/a 0 0 00:06:25.540 tests 5 5 5 0 0 00:06:25.540 asserts 28 28 28 0 n/a 00:06:25.540 00:06:25.540 Elapsed time = 0.000 seconds 00:06:25.540 04:46:48 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:25.540 00:06:25.540 00:06:25.540 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.540 http://cunit.sourceforge.net/ 00:06:25.540 00:06:25.540 00:06:25.540 Suite: nvme_io_msg 00:06:25.540 Test: test_nvme_io_msg_send ...passed 00:06:25.540 Test: test_nvme_io_msg_process ...passed 00:06:25.540 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:25.540 00:06:25.540 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.540 suites 1 1 n/a 0 0 00:06:25.540 tests 3 3 3 0 0 00:06:25.540 asserts 56 56 56 0 n/a 00:06:25.540 00:06:25.540 Elapsed time = 0.000 seconds 00:06:25.540 04:46:48 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:25.540 00:06:25.540 00:06:25.540 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.540 http://cunit.sourceforge.net/ 00:06:25.540 00:06:25.540 00:06:25.540 Suite: nvme_pcie_common 00:06:25.540 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:06:25.540 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-11-18 04:46:48.875678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:25.540 passed 00:06:25.540 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:25.540 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-18 04:46:48.876418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:25.540 [2024-11-18 04:46:48.876487] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:25.540 passed 00:06:25.540 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:06:25.540 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:06:25.540 00:06:25.540 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.540 suites 1 1 n/a 0 0 00:06:25.540 tests 6 6 6 0 0 00:06:25.540 asserts 148 148 148 0 n/a 00:06:25.540 00:06:25.540 Elapsed time = 0.001 seconds 00:06:25.540 [2024-11-18 04:46:48.876537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:25.540 [2024-11-18 04:46:48.876951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:25.540 [2024-11-18 04:46:48.876988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:25.540 04:46:48 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:25.540 00:06:25.540 00:06:25.540 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.540 http://cunit.sourceforge.net/ 00:06:25.540 00:06:25.540 00:06:25.540 Suite: nvme_fabric 00:06:25.540 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:25.540 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:25.540 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:25.540 Test: test_nvme_fabric_discover_probe ...passed 00:06:25.540 Test: test_nvme_fabric_qpair_connect ...passed 00:06:25.540 00:06:25.540 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.540 suites 1 1 n/a 0 0 00:06:25.540 tests 5 5 5 0 0 00:06:25.540 asserts 60 60 60 0 n/a 00:06:25.540 00:06:25.540 Elapsed time = 0.001 seconds 00:06:25.541 [2024-11-18 04:46:48.903478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:25.541 04:46:48 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:25.541 00:06:25.541 00:06:25.541 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.541 http://cunit.sourceforge.net/ 00:06:25.541 00:06:25.541 00:06:25.541 Suite: nvme_opal 00:06:25.541 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:25.541 Test: test_opal_add_short_atom_header ...passed 00:06:25.541 00:06:25.541 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.541 suites 1 1 n/a 0 0 00:06:25.541 tests 2 2 2 0 0 00:06:25.541 asserts 22 22 22 0 n/a 00:06:25.541 00:06:25.541 Elapsed time = 0.000 seconds 00:06:25.541 [2024-11-18 04:46:48.935046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:25.541 00:06:25.541 real 0m1.185s 00:06:25.541 user 0m0.574s 00:06:25.541 sys 0m0.464s 00:06:25.541 04:46:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.541 04:46:48 -- common/autotest_common.sh@10 -- # set +x 00:06:25.541 ************************************ 00:06:25.541 END TEST unittest_nvme 00:06:25.541 ************************************ 00:06:25.541 04:46:48 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:25.541 04:46:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.541 04:46:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.541 04:46:48 -- common/autotest_common.sh@10 -- # set +x 00:06:25.541 ************************************ 00:06:25.541 START TEST unittest_log 00:06:25.541 ************************************ 00:06:25.541 04:46:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:25.541 00:06:25.541 00:06:25.541 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.541 http://cunit.sourceforge.net/ 00:06:25.541 00:06:25.541 00:06:25.541 Suite: log 00:06:25.541 Test: log_test ...passed 00:06:25.541 Test: deprecation ...[2024-11-18 04:46:49.011487] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:25.541 [2024-11-18 04:46:49.011684] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:25.541 log dump test: 00:06:25.541 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:25.541 spdk dump test: 00:06:25.541 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:25.541 spdk dump test: 00:06:25.541 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:25.541 00000010 65 20 63 68 61 72 73 e chars 00:06:26.490 passed 00:06:26.490 00:06:26.751 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.751 suites 1 1 n/a 0 0 00:06:26.751 tests 2 2 2 0 0 00:06:26.751 asserts 73 73 73 0 n/a 00:06:26.751 00:06:26.751 Elapsed time = 0.001 seconds 00:06:26.751 00:06:26.751 real 0m1.032s 00:06:26.751 user 0m0.013s 00:06:26.751 sys 0m0.019s 00:06:26.751 04:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.751 ************************************ 00:06:26.751 END TEST unittest_log 00:06:26.751 ************************************ 00:06:26.751 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 04:46:50 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:26.751 04:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.751 04:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.751 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 
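As a side note to the unittest_log output above: the hex lines ("73 70 64 6b ... spdk dump 16 mor" / "65 20 63 68 61 72 73 e chars") are SPDK's buffer-dump format. A minimal sketch that produces the same layout, assuming the spdk_log_dump() signature in include/spdk/log.h of this checkout:

#include <stdio.h>
#include <string.h>
#include "spdk/log.h"

int main(void)
{
	const char buf[] = "spdk dump 16 more chars";

	/* Prints a label line, then offset, hex bytes and ASCII, e.g.
	 *   00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72  spdk dump 16 mor
	 *   00000010 65 20 63 68 61 72 73                             e chars
	 * as seen in the log_ut output above. */
	spdk_log_dump(stderr, "spdk dump test:", buf, strlen(buf));
	return 0;
}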
************************************ 00:06:26.751 START TEST unittest_lvol 00:06:26.751 ************************************ 00:06:26.751 04:46:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:26.751 00:06:26.751 00:06:26.751 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.751 http://cunit.sourceforge.net/ 00:06:26.751 00:06:26.751 00:06:26.751 Suite: lvol 00:06:26.751 Test: lvs_init_unload_success ...[2024-11-18 04:46:50.104061] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:26.751 passed 00:06:26.751 Test: lvs_init_destroy_success ...passed 00:06:26.751 Test: lvs_init_opts_success ...passed 00:06:26.751 Test: lvs_unload_lvs_is_null_fail ...[2024-11-18 04:46:50.104556] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:26.751 [2024-11-18 04:46:50.104783] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:26.751 passed 00:06:26.751 Test: lvs_names ...[2024-11-18 04:46:50.104844] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:26.751 [2024-11-18 04:46:50.104889] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:26.751 [2024-11-18 04:46:50.105044] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:26.751 passed 00:06:26.751 Test: lvol_create_destroy_success ...passed 00:06:26.751 Test: lvol_create_fail ...passed 00:06:26.751 Test: lvol_destroy_fail ...[2024-11-18 04:46:50.105595] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:26.751 [2024-11-18 04:46:50.105698] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:26.751 passed 00:06:26.751 Test: lvol_close ...[2024-11-18 04:46:50.105990] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:26.751 passed 00:06:26.751 Test: lvol_resize ...[2024-11-18 04:46:50.106171] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:26.751 [2024-11-18 04:46:50.106240] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:26.751 passed 00:06:26.751 Test: lvol_set_read_only ...passed 00:06:26.751 Test: test_lvs_load ...passed 00:06:26.751 Test: lvols_load ...[2024-11-18 04:46:50.106854] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:26.751 [2024-11-18 04:46:50.106903] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:26.751 [2024-11-18 04:46:50.107057] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:26.751 passed 00:06:26.751 Test: lvol_open ...[2024-11-18 04:46:50.107147] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:26.751 passed 00:06:26.751 Test: lvol_snapshot ...passed 00:06:26.751 Test: lvol_snapshot_fail ...[2024-11-18 04:46:50.107791] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:26.751 passed 00:06:26.751 
Test: lvol_clone ...passed 00:06:26.751 Test: lvol_clone_fail ...passed 00:06:26.751 Test: lvol_iter_clones ...[2024-11-18 04:46:50.108227] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:26.751 passed 00:06:26.751 Test: lvol_refcnt ...passed 00:06:26.751 Test: lvol_names ...[2024-11-18 04:46:50.108648] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 97d4cc3a-180d-40c3-935d-a888d2a9e197 because it is still open 00:06:26.751 [2024-11-18 04:46:50.108811] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:26.751 [2024-11-18 04:46:50.108880] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:26.751 passed 00:06:26.751 Test: lvol_create_thin_provisioned ...[2024-11-18 04:46:50.109051] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:26.751 passed 00:06:26.751 Test: lvol_rename ...[2024-11-18 04:46:50.109411] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:26.751 [2024-11-18 04:46:50.109502] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:26.751 passed 00:06:26.751 Test: lvs_rename ...passed 00:06:26.751 Test: lvol_inflate ...[2024-11-18 04:46:50.109677] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:26.751 passed 00:06:26.751 Test: lvol_decouple_parent ...passed 00:06:26.751 Test: lvol_get_xattr ...[2024-11-18 04:46:50.109836] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:26.751 [2024-11-18 04:46:50.110006] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:26.751 passed 00:06:26.751 Test: lvol_esnap_reload ...passed 00:06:26.751 Test: lvol_esnap_create_bad_args ...[2024-11-18 04:46:50.110391] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:26.751 [2024-11-18 04:46:50.110437] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
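A side note on the lvol_refcnt case above ("Cannot destroy lvol 97d4cc3a-180d-40c3-935d-a888d2a9e197 because it is still open"): it checks that an open lvol refuses destruction. A minimal sketch of the close-before-destroy ordering this implies, assuming the callback-style API in include/spdk/lvol.h:

#include <stddef.h>
#include "spdk/lvol.h"

static void destroy_done(void *cb_arg, int lvolerrno)
{
	(void)cb_arg;
	(void)lvolerrno; /* 0 on success */
}

static void close_done(void *cb_arg, int lvolerrno)
{
	struct spdk_lvol *lvol = cb_arg;

	if (lvolerrno == 0) {
		/* Destroy only once the handle is closed; destroying an
		 * lvol that is still open fails, as exercised above. */
		spdk_lvol_destroy(lvol, destroy_done, NULL);
	}
}

static void remove_lvol(struct spdk_lvol *lvol)
{
	spdk_lvol_close(lvol, close_done, lvol);
}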
00:06:26.751 [2024-11-18 04:46:50.110468] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:26.751 [2024-11-18 04:46:50.110521] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:26.751 [2024-11-18 04:46:50.110642] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:26.751 passed 00:06:26.751 Test: lvol_esnap_create_delete ...passed 00:06:26.751 Test: lvol_esnap_load_esnaps ...[2024-11-18 04:46:50.110926] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:26.751 passed 00:06:26.751 Test: lvol_esnap_missing ...[2024-11-18 04:46:50.111097] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:26.751 [2024-11-18 04:46:50.111141] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:26.751 passed 00:06:26.751 Test: lvol_esnap_hotplug ... 00:06:26.751 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:26.751 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:26.751 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:26.751 [2024-11-18 04:46:50.111638] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1e1fc92a-5ec0-4fb7-8dc5-1db7b2747de0: failed to create esnap bs_dev: error -12 00:06:26.751 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:26.751 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:26.751 [2024-11-18 04:46:50.111825] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 071ccfb3-0fe4-4f22-a746-9ff9da5e7377: failed to create esnap bs_dev: error -12 00:06:26.751 [2024-11-18 04:46:50.111926] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol c70450d4-1ed6-41fd-82ab-9ec21065dc7c: failed to create esnap bs_dev: error -12 00:06:26.751 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:26.751 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:26.751 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:26.751 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:26.751 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:26.751 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:26.751 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:26.751 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:26.751 passed 00:06:26.751 Test: lvol_get_by ...passed 00:06:26.751 00:06:26.751 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.751 suites 1 1 n/a 0 0 00:06:26.751 tests 34 34 34 0 0 00:06:26.751 asserts 1439 1439 1439 0 n/a 00:06:26.751 00:06:26.751 Elapsed time = 0.009 seconds 00:06:26.751 00:06:26.751 real 0m0.047s 00:06:26.751 user 0m0.026s 00:06:26.751 sys 0m0.022s 00:06:26.751 04:46:50 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.751 ************************************ 00:06:26.751 END TEST unittest_lvol 00:06:26.751 ************************************ 00:06:26.752 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.752 04:46:50 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:26.752 04:46:50 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:26.752 04:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.752 04:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.752 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.752 ************************************ 00:06:26.752 START TEST unittest_nvme_rdma 00:06:26.752 ************************************ 00:06:26.752 04:46:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:26.752 00:06:26.752 00:06:26.752 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.752 http://cunit.sourceforge.net/ 00:06:26.752 00:06:26.752 00:06:26.752 Suite: nvme_rdma 00:06:26.752 Test: test_nvme_rdma_build_sgl_request ...passed 00:06:26.752 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:26.752 Test: test_nvme_rdma_build_contig_request ...[2024-11-18 04:46:50.197109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:26.752 [2024-11-18 04:46:50.197352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:26.752 [2024-11-18 04:46:50.197401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:26.752 [2024-11-18 04:46:50.197492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:26.752 passed 00:06:26.752 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:26.752 Test: test_nvme_rdma_create_reqs ...[2024-11-18 04:46:50.197614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:26.752 passed 00:06:26.752 Test: test_nvme_rdma_create_rsps ...passed 00:06:26.752 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:06:26.752 Test: test_nvme_rdma_poller_create ...[2024-11-18 04:46:50.197970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:26.752 [2024-11-18 04:46:50.198174] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:26.752 [2024-11-18 04:46:50.198222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
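A side note on the repeated "Failed to create qpair with size 0/1. Minimum queue size is 2." errors, which show up for both the TCP and RDMA transports in this run: on the host side the relevant knob is io_queue_size in the I/O qpair options. A minimal sketch, assuming the option helpers in include/spdk/nvme.h:

#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
alloc_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	if (opts.io_queue_size < 2) {
		opts.io_queue_size = 2; /* sizes 0 and 1 are rejected, as logged above */
	}

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}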
00:06:26.752 passed 00:06:26.752 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:06:26.752 Test: test_nvme_rdma_ctrlr_construct ...passed 00:06:26.752 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:26.752 Test: test_nvme_rdma_req_init ...passed 00:06:26.752 Test: test_nvme_rdma_validate_cm_event ...[2024-11-18 04:46:50.198380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:26.752 passed 00:06:26.752 Test: test_nvme_rdma_qpair_init ...passed 00:06:26.752 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:26.752 Test: test_nvme_rdma_memory_domain ...[2024-11-18 04:46:50.198699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:26.752 [2024-11-18 04:46:50.198741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:26.752 [2024-11-18 04:46:50.198931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:26.752 passed 00:06:26.752 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:26.752 Test: test_rdma_get_memory_translation ...passed 00:06:26.752 Test: test_get_rdma_qpair_from_wc ...passed 00:06:26.752 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:26.752 Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-18 04:46:50.199021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:26.752 [2024-11-18 04:46:50.199054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:26.752 [2024-11-18 04:46:50.199161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:26.752 [2024-11-18 04:46:50.199214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:26.752 passed 00:06:26.752 Test: test_nvme_rdma_qpair_set_poller ...[2024-11-18 04:46:50.199356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:26.752 [2024-11-18 04:46:50.199401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:26.752 [2024-11-18 04:46:50.199429] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fe84320a030 on poll group 0x50b000000040 00:06:26.752 [2024-11-18 04:46:50.199475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:26.752 [2024-11-18 04:46:50.199511] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:26.752 [2024-11-18 04:46:50.199537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fe84320a030 on poll group 0x50b000000040 00:06:26.752 passed 00:06:26.752 00:06:26.752 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.752 suites 1 1 n/a 0 0 00:06:26.752 tests 22 22 22 0 0 00:06:26.752 asserts 412 412 412 0 n/a 00:06:26.752 00:06:26.752 Elapsed time = 0.003 seconds 00:06:26.752 [2024-11-18 04:46:50.199608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:26.752 00:06:26.752 real 0m0.032s 00:06:26.752 user 0m0.013s 00:06:26.752 sys 0m0.019s 00:06:26.752 ************************************ 00:06:26.752 04:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.752 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.752 END TEST unittest_nvme_rdma 00:06:26.752 ************************************ 00:06:26.752 04:46:50 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:26.752 04:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.752 04:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.752 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.752 ************************************ 00:06:26.752 START TEST unittest_nvmf_transport 00:06:26.752 ************************************ 00:06:26.752 04:46:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:27.012 00:06:27.012 00:06:27.012 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.012 http://cunit.sourceforge.net/ 00:06:27.012 00:06:27.012 00:06:27.012 Suite: nvmf 00:06:27.012 Test: test_spdk_nvmf_transport_create ...[2024-11-18 04:46:50.284312] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:27.012 [2024-11-18 04:46:50.284561] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:27.012 [2024-11-18 04:46:50.284621] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:27.012 passed 00:06:27.012 Test: test_nvmf_transport_poll_group_create ...[2024-11-18 04:46:50.284696] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:27.012 passed 00:06:27.012 Test: test_spdk_nvmf_transport_opts_init ...passed 00:06:27.012 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:27.012 00:06:27.012 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.012 suites 1 1 n/a 0 0 00:06:27.012 tests 4 4 4 0 0 00:06:27.012 asserts 49 49 49 0 n/a 00:06:27.012 00:06:27.012 Elapsed time = 0.001 seconds 00:06:27.013 [2024-11-18 04:46:50.284981] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
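A side note on the transport_ut errors around here: spdk_nvmf_transport_opts_init() rejects exactly the inputs logged, an unavailable transport name, a NULL opts pointer, and a zero opts_size. A minimal sketch of the passing call pattern, with the signature assumed from include/spdk/nvmf.h:

#include <stdbool.h>
#include <string.h>
#include "spdk/nvmf.h"

static bool
get_tcp_transport_opts(struct spdk_nvmf_transport_opts *opts)
{
	memset(opts, 0, sizeof(*opts));

	/* Returns false for an unavailable transport name (the unit test
	 * passes "new_ops" and "invalid_ops"), a NULL opts pointer, or a
	 * zero opts_size, matching the errors in this suite. */
	return spdk_nvmf_transport_opts_init("tcp", opts, sizeof(*opts));
}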
00:06:27.013 [2024-11-18 04:46:50.285023] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:27.013 [2024-11-18 04:46:50.285057] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:27.013 00:06:27.013 real 0m0.037s 00:06:27.013 user 0m0.015s 00:06:27.013 sys 0m0.022s 00:06:27.013 04:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.013 ************************************ 00:06:27.013 END TEST unittest_nvmf_transport 00:06:27.013 ************************************ 00:06:27.013 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 04:46:50 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:27.013 04:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.013 04:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.013 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 ************************************ 00:06:27.013 START TEST unittest_rdma 00:06:27.013 ************************************ 00:06:27.013 04:46:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:27.013 00:06:27.013 00:06:27.013 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.013 http://cunit.sourceforge.net/ 00:06:27.013 00:06:27.013 00:06:27.013 Suite: rdma_common 00:06:27.013 Test: test_spdk_rdma_pd ...passed 00:06:27.013 00:06:27.013 [2024-11-18 04:46:50.375555] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:27.013 [2024-11-18 04:46:50.375896] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:27.013 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.013 suites 1 1 n/a 0 0 00:06:27.013 tests 1 1 1 0 0 00:06:27.013 asserts 31 31 31 0 n/a 00:06:27.013 00:06:27.013 Elapsed time = 0.001 seconds 00:06:27.013 00:06:27.013 real 0m0.033s 00:06:27.013 user 0m0.021s 00:06:27.013 sys 0m0.012s 00:06:27.013 04:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.013 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 ************************************ 00:06:27.013 END TEST unittest_rdma 00:06:27.013 ************************************ 00:06:27.013 04:46:50 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:27.013 04:46:50 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:27.013 04:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.013 04:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.013 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 ************************************ 00:06:27.013 START TEST unittest_nvme_cuse 00:06:27.013 ************************************ 00:06:27.013 04:46:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:27.013 00:06:27.013 00:06:27.013 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.013 http://cunit.sourceforge.net/ 00:06:27.013 00:06:27.013 00:06:27.013 Suite: nvme_cuse 00:06:27.013 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:27.013 Test: 
test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:27.013 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:27.013 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:27.013 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:27.013 Test: test_cuse_nvme_submit_io ...[2024-11-18 04:46:50.470067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:27.013 passed 00:06:27.013 Test: test_cuse_nvme_reset ...passed 00:06:27.013 Test: test_nvme_cuse_stop ...passed 00:06:27.013 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:27.013 00:06:27.013 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.013 suites 1 1 n/a 0 0 00:06:27.013 tests 9 9 9 0 0 00:06:27.013 asserts 121 121 121 0 n/a 00:06:27.013 00:06:27.013 Elapsed time = 0.002 seconds 00:06:27.013 [2024-11-18 04:46:50.470343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:27.013 00:06:27.013 real 0m0.033s 00:06:27.013 user 0m0.014s 00:06:27.013 sys 0m0.019s 00:06:27.013 04:46:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.013 ************************************ 00:06:27.013 END TEST unittest_nvme_cuse 00:06:27.013 ************************************ 00:06:27.013 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.013 04:46:50 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:06:27.013 04:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.013 04:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.013 04:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.274 ************************************ 00:06:27.274 START TEST unittest_nvmf 00:06:27.274 ************************************ 00:06:27.274 04:46:50 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:06:27.274 04:46:50 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:27.274 00:06:27.274 00:06:27.274 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.274 http://cunit.sourceforge.net/ 00:06:27.274 00:06:27.274 00:06:27.274 Suite: nvmf 00:06:27.274 Test: test_get_log_page ...passed 00:06:27.274 Test: test_process_fabrics_cmd ...passed 00:06:27.274 Test: test_connect ...[2024-11-18 04:46:50.563222] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:27.274 [2024-11-18 04:46:50.564104] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:27.274 [2024-11-18 04:46:50.564166] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:27.274 [2024-11-18 04:46:50.564235] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:27.274 [2024-11-18 04:46:50.564264] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:06:27.274 [2024-11-18 04:46:50.564310] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:27.274 [2024-11-18 04:46:50.564352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:27.274 [2024-11-18 
04:46:50.564402] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:27.274 [2024-11-18 04:46:50.564437] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:27.274 [2024-11-18 04:46:50.564548] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:27.274 [2024-11-18 04:46:50.564627] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:27.274 [2024-11-18 04:46:50.564899] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:27.274 [2024-11-18 04:46:50.564972] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:27.274 [2024-11-18 04:46:50.565068] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:27.274 [2024-11-18 04:46:50.565142] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:27.274 [2024-11-18 04:46:50.565281] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:27.274 [2024-11-18 04:46:50.565448] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:27.274 passed 00:06:27.274 Test: test_get_ns_id_desc_list ...passed 00:06:27.274 Test: test_identify_ns ...[2024-11-18 04:46:50.565742] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:27.274 passed 00:06:27.274 Test: test_identify_ns_iocs_specific ...[2024-11-18 04:46:50.565961] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:27.274 [2024-11-18 04:46:50.566116] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:27.274 [2024-11-18 04:46:50.566292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:27.274 [2024-11-18 04:46:50.566582] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:27.274 passed 00:06:27.274 Test: test_reservation_write_exclusive ...passed 00:06:27.274 Test: test_reservation_exclusive_access ...passed 00:06:27.274 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:27.274 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:27.274 Test: test_reservation_notification_log_page ...passed 00:06:27.274 Test: test_get_dif_ctx ...passed 00:06:27.274 Test: test_set_get_features ...[2024-11-18 04:46:50.567140] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:27.274 [2024-11-18 04:46:50.567209] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:27.274 [2024-11-18 04:46:50.567244] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid 
THSEpassed 00:06:27.274 Test: test_identify_ctrlr ...passed 00:06:27.274 Test: test_identify_ctrlr_iocs_specific ...L 3 00:06:27.274 [2024-11-18 04:46:50.567274] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:27.274 passed 00:06:27.274 Test: test_custom_admin_cmd ...passed 00:06:27.274 Test: test_fused_compare_and_write ...passed 00:06:27.274 Test: test_multi_async_event_reqs ...passed 00:06:27.274 Test: test_get_ana_log_page_one_ns_per_anagrp ...[2024-11-18 04:46:50.567839] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:27.274 [2024-11-18 04:46:50.567916] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:27.274 [2024-11-18 04:46:50.567962] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:27.274 passed 00:06:27.274 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:27.274 Test: test_multi_async_events ...passed 00:06:27.274 Test: test_rae ...passed 00:06:27.274 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:27.274 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:27.274 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:06:27.274 Test: test_zcopy_read ...[2024-11-18 04:46:50.568571] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:27.274 passed 00:06:27.274 Test: test_zcopy_write ...passed 00:06:27.274 Test: test_nvmf_property_set ...passed 00:06:27.274 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:06:27.274 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-18 04:46:50.568777] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:27.275 [2024-11-18 04:46:50.568827] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:27.275 passed 00:06:27.275 00:06:27.275 [2024-11-18 04:46:50.568871] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:27.275 [2024-11-18 04:46:50.568919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:27.275 [2024-11-18 04:46:50.568956] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:27.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.275 suites 1 1 n/a 0 0 00:06:27.275 tests 30 30 30 0 0 00:06:27.275 asserts 885 885 885 0 n/a 00:06:27.275 00:06:27.275 Elapsed time = 0.006 seconds 00:06:27.275 04:46:50 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:27.275 00:06:27.275 00:06:27.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.275 http://cunit.sourceforge.net/ 00:06:27.275 00:06:27.275 00:06:27.275 Suite: nvmf 00:06:27.275 Test: test_get_rw_params ...passed 00:06:27.275 Test: test_lba_in_range ...passed 00:06:27.275 Test: test_get_dif_ctx ...passed 00:06:27.275 Test: 
test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:27.275 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:06:27.275 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-18 04:46:50.601951] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:27.275 [2024-11-18 04:46:50.602290] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:27.275 [2024-11-18 04:46:50.602352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:27.275 [2024-11-18 04:46:50.602410] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:27.275 passed 00:06:27.275 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:06:27.275 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed[2024-11-18 04:46:50.602441] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:27.275 [2024-11-18 04:46:50.602496] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:27.275 [2024-11-18 04:46:50.602534] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:27.275 [2024-11-18 04:46:50.602572] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:27.275 [2024-11-18 04:46:50.602623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:27.275 00:06:27.275 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:27.275 00:06:27.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.275 suites 1 1 n/a 0 0 00:06:27.275 tests 9 9 9 0 0 00:06:27.275 asserts 157 157 157 0 n/a 00:06:27.275 00:06:27.275 Elapsed time = 0.001 seconds 00:06:27.275 04:46:50 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:27.275 00:06:27.275 00:06:27.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.275 http://cunit.sourceforge.net/ 00:06:27.275 00:06:27.275 00:06:27.275 Suite: nvmf 00:06:27.275 Test: test_discovery_log ...passed 00:06:27.275 Test: test_discovery_log_with_filters ...passed 00:06:27.275 00:06:27.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.275 suites 1 1 n/a 0 0 00:06:27.275 tests 2 2 2 0 0 00:06:27.275 asserts 238 238 238 0 n/a 00:06:27.275 00:06:27.275 Elapsed time = 0.003 seconds 00:06:27.275 04:46:50 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:27.275 00:06:27.275 00:06:27.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.275 http://cunit.sourceforge.net/ 00:06:27.275 00:06:27.275 00:06:27.275 Suite: nvmf 00:06:27.275 Test: nvmf_test_create_subsystem ...[2024-11-18 04:46:50.689027] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 
00:06:27.275 [2024-11-18 04:46:50.689524] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:27.275 [2024-11-18 04:46:50.689609] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:27.275 [2024-11-18 04:46:50.689666] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:27.275 [2024-11-18 04:46:50.689746] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:27.275 [2024-11-18 04:46:50.689781] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:27.275 [2024-11-18 04:46:50.689971] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:27.275 [2024-11-18 04:46:50.690139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
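A side note on the nvmf_nqn_is_valid errors above: taken together they outline the NQN grammar the target enforces (overall length limits, label rules in the domain part, and the UUID form). Collected from the error text of this run, for illustration only:

/* NQN forms exercised by nvmf_test_create_subsystem, per the error
 * messages in this run. */
static const char *valid_nqns[] = {
	"nqn.2016-06.io.spdk:subsystem1",
	"nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2",
};

static const char *invalid_nqns[] = {
	"nqn.2016-06.io.spdk:",            /* no user-specified name after ':' */
	"nqn.2016-06.io.3spdk:sub",        /* label starts with a digit */
	"nqn.2016-06.io.-spdk:subsystem1", /* label starts with '-' */
	"nqn.2016-06.io.spdk-:subsystem1", /* label ends with '-' */
	"nqn.2016-06.io..spdk:subsystem1", /* empty label */
};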
00:06:27.275 [2024-11-18 04:46:50.690320] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:27.275 [2024-11-18 04:46:50.690376] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:27.275 passed 00:06:27.275 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-18 04:46:50.690429] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:27.275 [2024-11-18 04:46:50.690815] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Reqpassed 00:06:27.275 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:27.275 Test: test_reservation_register ...uested NSID 5 already in use 00:06:27.275 [2024-11-18 04:46:50.690891] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:27.275 passed 00:06:27.275 Test: test_reservation_register_with_ptpl ...[2024-11-18 04:46:50.691266] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 [2024-11-18 04:46:50.691452] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:27.275 passed 00:06:27.275 Test: test_reservation_acquire_preempt_1 ...passed 00:06:27.275 Test: test_reservation_acquire_release_with_ptpl ...[2024-11-18 04:46:50.693291] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 passed 00:06:27.275 Test: test_reservation_release ...passed 00:06:27.275 Test: test_reservation_unregister_notification ...[2024-11-18 04:46:50.696281] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 passed 00:06:27.275 Test: test_reservation_release_notification ...[2024-11-18 04:46:50.696613] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 [2024-11-18 04:46:50.696980] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 passed 00:06:27.275 Test: test_reservation_release_notification_write_exclusive ...passed 00:06:27.275 Test: test_reservation_clear_notification ...[2024-11-18 04:46:50.697280] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 passed 00:06:27.275 Test: test_reservation_preempt_notification ...[2024-11-18 04:46:50.697593] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 passed 00:06:27.275 Test: test_spdk_nvmf_ns_event ...[2024-11-18 04:46:50.697935] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:27.275 passed 00:06:27.275 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:27.275 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:27.275 Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-18 04:46:50.699118] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:27.275 [2024-11-18 04:46:50.699279] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:27.275 passed 00:06:27.275 Test: test_nvmf_ns_reservation_report ...[2024-11-18 04:46:50.699501] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:27.275 passed 00:06:27.275 Test: test_nvmf_nqn_is_valid ...[2024-11-18 04:46:50.699600] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:27.275 [2024-11-18 04:46:50.699651] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:811ff695-b760-45a0-a4b2-162d6748865": uuid is not the correct length 00:06:27.275 [2024-11-18 04:46:50.699690] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:27.275 passed 00:06:27.275 Test: test_nvmf_ns_reservation_restore ...[2024-11-18 04:46:50.699817] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:27.276 passed 00:06:27.276 Test: test_nvmf_subsystem_state_change ...passed 00:06:27.276 Test: test_nvmf_reservation_custom_ops ...passed 00:06:27.276 00:06:27.276 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.276 suites 1 1 n/a 0 0 00:06:27.276 tests 22 22 22 0 0 00:06:27.276 asserts 407 407 407 0 n/a 00:06:27.276 00:06:27.276 Elapsed time = 0.012 seconds 00:06:27.276 04:46:50 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:27.276 00:06:27.276 00:06:27.276 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.276 http://cunit.sourceforge.net/ 00:06:27.276 00:06:27.276 00:06:27.276 Suite: nvmf 00:06:27.276 Test: test_nvmf_tcp_create ...[2024-11-18 04:46:50.767487] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:27.276 passed 00:06:27.538 Test: test_nvmf_tcp_destroy ...passed 00:06:27.538 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:27.538 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:27.538 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:27.538 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:27.538 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:27.538 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-18 04:46:50.885413] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.538 passed 00:06:27.538 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:06:27.538 Test: test_nvmf_tcp_icreq_handle ...[2024-11-18 04:46:50.885525] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990b020 is same with the state(5) to be set 
00:06:27.538 [2024-11-18 04:46:50.885569] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990b020 is same with the state(5) to be set 00:06:27.538 [2024-11-18 04:46:50.885608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.538 [2024-11-18 04:46:50.885638] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990b020 is same with the state(5) to be set 00:06:27.538 [2024-11-18 04:46:50.885783] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:27.538 [2024-11-18 04:46:50.885837] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.538 [2024-11-18 04:46:50.885881] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990d180 is same with the state(5) to be set 00:06:27.538 [2024-11-18 04:46:50.885912] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:27.538 [2024-11-18 04:46:50.885931] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990d180 is same with the state(5) to be set 00:06:27.538 [2024-11-18 04:46:50.885963] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.538 [2024-11-18 04:46:50.885996] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990d180 is same with the state(5) to be set 00:06:27.538 [2024-11-18 04:46:50.886033] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:27.538 passed 00:06:27.538 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:27.538 Test: test_nvmf_tcp_invalid_sgl ...passed 00:06:27.539 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-18 04:46:50.886077] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab990d180 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886210] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:27.539 [2024-11-18 04:46:50.886250] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886284] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab99116a0 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886335] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x75bab980c8c0 00:06:27.539 [2024-11-18 04:46:50.886382] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886417] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886451] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x75bab980c020 00:06:27.539 [2024-11-18 04:46:50.886497] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886527] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886566] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:27.539 [2024-11-18 04:46:50.886618] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886664] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886696] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:27.539 [2024-11-18 04:46:50.886731] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886761] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886802] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886839] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886888] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886913] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.886962] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.886994] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.887033] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.887065] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.887105] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 [2024-11-18 04:46:50.887129] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 [2024-11-18 04:46:50.887177] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:27.539 passed 00:06:27.539 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-18 04:46:50.887225] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75bab980c020 is same with the state(5) to be set 00:06:27.539 passed 00:06:27.539 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-18 04:46:50.926001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:27.539 passed 00:06:27.539 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-18 04:46:50.926106] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:27.539 [2024-11-18 04:46:50.927484] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:27.539 passed 00:06:27.539 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-11-18 04:46:50.927567] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:27.539 [2024-11-18 04:46:50.928338] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:27.539 [2024-11-18 04:46:50.928383] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:06:27.539 passed 00:06:27.539 00:06:27.539 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.539 suites 1 1 n/a 0 0 00:06:27.539 tests 17 17 17 0 0 00:06:27.539 asserts 222 222 222 0 n/a 00:06:27.539 00:06:27.539 Elapsed time = 0.181 seconds 00:06:27.539 04:46:50 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:27.539 00:06:27.539 00:06:27.539 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.539 http://cunit.sourceforge.net/ 00:06:27.539 00:06:27.539 00:06:27.539 Suite: nvmf 00:06:27.539 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:27.539 00:06:27.539 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.539 suites 1 1 n/a 0 0 00:06:27.539 tests 1 1 1 0 0 00:06:27.539 asserts 17 17 17 0 n/a 00:06:27.539 00:06:27.539 Elapsed time = 0.027 seconds 00:06:27.799 00:06:27.799 real 0m0.558s 00:06:27.799 user 0m0.244s 00:06:27.799 sys 0m0.310s 00:06:27.799 04:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.799 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.799 ************************************ 00:06:27.799 END TEST unittest_nvmf 00:06:27.799 ************************************ 00:06:27.799 04:46:51 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:27.799 04:46:51 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:27.799 04:46:51 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:27.799 04:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.799 04:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.799 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.799 
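Before the next binary's output, a note on the format that repeats throughout this log: each *_ut executable is a CUnit 2.1 program, which is why every section prints the same "CUnit - A unit testing framework for C" header, a Suite/Test listing, and a Run Summary with tests/asserts counts and elapsed time. A minimal sketch of that harness shape, assuming only stock CUnit (the suite and test names here are placeholders, not SPDK's):

```c
/* Minimal CUnit 2.1 harness -- illustrative, not SPDK source. */
#include <CUnit/Basic.h>

static void test_example(void)
{
    CU_ASSERT(1 + 1 == 2);  /* each assertion is counted in the "asserts" column */
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();

    CU_pSuite suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL ||
        CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();  /* prints the per-test lines and the Run Summary */
    unsigned int failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures > 0 ? 1 : 0;
}
```

Note also that the many *ERROR* lines interleaved with "passed" results are expected: these tests drive error paths on purpose, so a logged error followed by "passed" means the code rejected bad input exactly as asserted.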
************************************ 00:06:27.799 START TEST unittest_nvmf_rdma 00:06:27.799 ************************************ 00:06:27.799 04:46:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:27.799 00:06:27.799 00:06:27.799 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.799 http://cunit.sourceforge.net/ 00:06:27.799 00:06:27.799 00:06:27.799 Suite: nvmf 00:06:27.799 Test: test_spdk_nvmf_rdma_request_parse_sgl ...passed 00:06:27.799 Test: test_spdk_nvmf_rdma_request_process ...[2024-11-18 04:46:51.184909] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:27.799 [2024-11-18 04:46:51.185177] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:27.799 [2024-11-18 04:46:51.185252] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:27.799 passed 00:06:27.799 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:27.799 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:27.799 Test: test_nvmf_rdma_opts_init ...passed 00:06:27.799 Test: test_nvmf_rdma_request_free_data ...passed 00:06:27.799 Test: test_nvmf_rdma_update_ibv_state ...passed 00:06:27.799 Test: test_nvmf_rdma_resources_create ...[2024-11-18 04:46:51.186847] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:06:27.799 [2024-11-18 04:46:51.186913] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:27.799 passed 00:06:27.799 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:27.799 Test: test_nvmf_rdma_resize_cq ...passed 00:06:27.799 00:06:27.799 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.799 suites 1 1 n/a 0 0 00:06:27.799 tests 10 10 10 0 0 00:06:27.799 asserts 584 584 584 0 n/a 00:06:27.799 00:06:27.799 Elapsed time = 0.004 seconds 00:06:27.799 [2024-11-18 04:46:51.188426] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:06:27.799 Using CQ of insufficient size may lead to CQ overrun 00:06:27.799 [2024-11-18 04:46:51.188479] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:27.799 [2024-11-18 04:46:51.188559] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:27.799 00:06:27.799 real 0m0.042s 00:06:27.799 user 0m0.024s 00:06:27.799 sys 0m0.018s 00:06:27.799 ************************************ 00:06:27.799 04:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.799 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.799 END TEST unittest_nvmf_rdma 00:06:27.799 ************************************ 00:06:27.799 04:46:51 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:27.799 04:46:51 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:06:27.799 04:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.799 04:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.799 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.799 ************************************ 00:06:27.799 START TEST unittest_scsi 00:06:27.799 ************************************ 00:06:27.799 04:46:51 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:06:27.799 04:46:51 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:27.799 00:06:27.799 00:06:27.799 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.799 http://cunit.sourceforge.net/ 00:06:27.799 00:06:27.799 00:06:27.799 Suite: dev_suite 00:06:27.799 Test: dev_destruct_null_dev ...passed 00:06:27.799 Test: dev_destruct_zero_luns ...passed 00:06:27.799 Test: dev_destruct_null_lun ...passed 00:06:27.799 Test: dev_destruct_success ...passed 00:06:27.799 Test: dev_construct_num_luns_zero ...passed 00:06:27.800 Test: dev_construct_no_lun_zero ...passed 00:06:27.800 Test: dev_construct_null_lun ...[2024-11-18 04:46:51.281275] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:27.800 [2024-11-18 04:46:51.281488] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:27.800 passed 00:06:27.800 Test: dev_construct_name_too_long ...[2024-11-18 04:46:51.281534] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:27.800 passed 00:06:27.800 Test: dev_construct_success ...passed 00:06:27.800 Test: dev_construct_success_lun_zero_not_first ...passed 00:06:27.800 Test: dev_queue_mgmt_task_success ...[2024-11-18 04:46:51.281579] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:27.800 passed 00:06:27.800 Test: dev_queue_task_success ...passed 00:06:27.800 Test: dev_stop_success ...passed 00:06:27.800 Test: dev_add_port_max_ports ...passed 00:06:27.800 Test: dev_add_port_construct_failure1 ...passed 00:06:27.800 
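The dev_suite rejections around this point (no LUNs, name longer than 255, and the port-count errors logged just below) exercise constructor argument validation. A rough illustration of that shape, not lib/scsi source; both limits are inferred from the logged messages:

```c
/* Illustrative only -- not SPDK's lib/scsi code.  Limits below are
 * taken from the error strings in this log, not from headers. */
#include <stddef.h>
#include <string.h>

#define SCSI_DEV_MAX_NAME  255  /* "name longer than maximum allowed length 255" */
#define SCSI_DEV_MAX_PORTS 4    /* "device already has 4 ports" */

static int dev_construct_check(const char *name, size_t num_luns)
{
    if (strlen(name) > SCSI_DEV_MAX_NAME)
        return -1;  /* rejected by dev_construct_name_too_long */
    if (num_luns == 0)
        return -1;  /* "no LUNs specified" */
    return 0;
}

static int dev_add_port_check(size_t cur_ports)
{
    /* rejected by dev_add_port_max_ports once the limit is reached */
    return cur_ports >= SCSI_DEV_MAX_PORTS ? -1 : 0;
}
```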
Test: dev_add_port_construct_failure2 ...passed 00:06:27.800 Test: dev_add_port_success1 ...passed 00:06:27.800 Test: dev_add_port_success2 ...passed 00:06:27.800 Test: dev_add_port_success3 ...passed 00:06:27.800 Test: dev_find_port_by_id_num_ports_zero ...passed[2024-11-18 04:46:51.281849] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:27.800 [2024-11-18 04:46:51.281893] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:27.800 [2024-11-18 04:46:51.281931] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:27.800 00:06:27.800 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:27.800 Test: dev_find_port_by_id_success ...passed 00:06:27.800 Test: dev_add_lun_bdev_not_found ...passed 00:06:27.800 Test: dev_add_lun_no_free_lun_id ...passed 00:06:27.800 Test: dev_add_lun_success1 ...passed 00:06:27.800 Test: dev_add_lun_success2 ...passed 00:06:27.800 Test: dev_check_pending_tasks ...passed 00:06:27.800 Test: dev_iterate_luns ...passed 00:06:27.800 Test: dev_find_free_lun ...[2024-11-18 04:46:51.282311] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:27.800 passed 00:06:27.800 00:06:27.800 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.800 suites 1 1 n/a 0 0 00:06:27.800 tests 29 29 29 0 0 00:06:27.800 asserts 97 97 97 0 n/a 00:06:27.800 00:06:27.800 Elapsed time = 0.002 seconds 00:06:27.800 04:46:51 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:27.800 00:06:27.800 00:06:27.800 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.800 http://cunit.sourceforge.net/ 00:06:27.800 00:06:27.800 00:06:27.800 Suite: lun_suite 00:06:27.800 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:06:27.800 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:06:27.800 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:27.800 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:27.800 Test: lun_task_mgmt_execute_invalid_case ...[2024-11-18 04:46:51.317844] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:27.800 [2024-11-18 04:46:51.318090] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:27.800 [2024-11-18 04:46:51.318248] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:27.800 passed 00:06:27.800 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:27.800 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:27.800 Test: lun_append_task_null_lun_not_supported ...passed 00:06:27.800 Test: lun_execute_scsi_task_pending ...passed 00:06:27.800 Test: lun_execute_scsi_task_complete ...passed 00:06:27.800 Test: lun_execute_scsi_task_resize ...passed 00:06:27.800 Test: lun_destruct_success ...passed 00:06:27.800 Test: lun_construct_null_ctx ...passed 00:06:27.800 Test: lun_construct_success ...passed 00:06:27.800 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:27.800 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:27.800 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:27.800 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...[2024-11-18 04:46:51.318444] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:27.800 passed 00:06:27.800 00:06:27.800 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.800 suites 1 1 n/a 0 0 00:06:27.800 tests 18 18 18 0 0 00:06:27.800 asserts 153 153 153 0 n/a 00:06:27.800 00:06:27.800 Elapsed time = 0.001 seconds 00:06:28.060 04:46:51 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:28.060 00:06:28.060 00:06:28.060 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.060 http://cunit.sourceforge.net/ 00:06:28.060 00:06:28.060 00:06:28.060 Suite: scsi_suite 00:06:28.060 Test: scsi_init ...passed 00:06:28.060 00:06:28.060 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.060 suites 1 1 n/a 0 0 00:06:28.060 tests 1 1 1 0 0 00:06:28.060 asserts 1 1 1 0 n/a 00:06:28.060 00:06:28.060 Elapsed time = 0.000 seconds 00:06:28.060 04:46:51 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:28.060 00:06:28.060 00:06:28.060 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.060 http://cunit.sourceforge.net/ 00:06:28.060 00:06:28.060 00:06:28.060 Suite: translation_suite 00:06:28.060 Test: mode_select_6_test ...passed 00:06:28.060 Test: mode_select_6_test2 ...passed 00:06:28.060 Test: mode_sense_6_test ...passed 00:06:28.060 Test: mode_sense_10_test ...passed 00:06:28.060 Test: inquiry_evpd_test ...passed 00:06:28.060 Test: inquiry_standard_test ...passed 00:06:28.060 Test: inquiry_overflow_test ...passed 00:06:28.060 Test: task_complete_test ...passed 00:06:28.060 Test: lba_range_test ...passed 00:06:28.060 Test: xfer_len_test ...passed 00:06:28.060 Test: xfer_test ...[2024-11-18 04:46:51.388659] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:28.060 passed 00:06:28.060 Test: scsi_name_padding_test ...passed 00:06:28.060 Test: get_dif_ctx_test ...passed 00:06:28.060 Test: unmap_split_test ...passed 00:06:28.060 00:06:28.060 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.060 suites 1 1 n/a 0 0 00:06:28.060 tests 14 14 14 0 0 00:06:28.060 asserts 1200 1200 1200 0 n/a 00:06:28.060 00:06:28.060 Elapsed time = 0.005 seconds 00:06:28.060 04:46:51 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:28.060 00:06:28.060 00:06:28.060 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.060 http://cunit.sourceforge.net/ 00:06:28.060 00:06:28.060 00:06:28.060 Suite: reservation_suite 00:06:28.060 Test: test_reservation_register ...passed 00:06:28.060 Test: test_reservation_reserve ...passed 00:06:28.060 Test: test_reservation_preempt_non_all_regs ...[2024-11-18 04:46:51.423039] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:28.060 [2024-11-18 04:46:51.423305] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:28.060 [2024-11-18 04:46:51.423377] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:28.060 [2024-11-18 04:46:51.423415] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:28.060 [2024-11-18 04:46:51.423478] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:28.060 passed 00:06:28.060 Test: test_reservation_preempt_all_regs ...passed 00:06:28.060 Test: test_reservation_cmds_conflict ...passed 00:06:28.060 Test: test_scsi2_reserve_release ...passed 00:06:28.060 Test: test_pr_with_scsi2_reserve_release ...[2024-11-18 04:46:51.423548] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:28.060 [2024-11-18 04:46:51.423628] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:28.060 [2024-11-18 04:46:51.423736] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:28.060 [2024-11-18 04:46:51.423794] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:28.060 [2024-11-18 04:46:51.423827] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:28.060 [2024-11-18 04:46:51.423865] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:28.060 [2024-11-18 04:46:51.423895] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:28.060 [2024-11-18 04:46:51.423929] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:28.060 [2024-11-18 04:46:51.423993] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:28.060 passed 00:06:28.060 00:06:28.060 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.060 suites 1 1 n/a 0 0 00:06:28.060 tests 7 7 7 0 0 00:06:28.060 asserts 257 257 257 0 n/a 00:06:28.060 00:06:28.060 Elapsed time = 0.001 seconds 00:06:28.060 00:06:28.060 real 0m0.170s 00:06:28.060 user 0m0.076s 00:06:28.060 sys 0m0.095s 00:06:28.060 04:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.060 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.060 ************************************ 00:06:28.060 END TEST unittest_scsi 00:06:28.060 ************************************ 00:06:28.060 04:46:51 -- unit/unittest.sh@252 -- # uname -s 00:06:28.060 04:46:51 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:06:28.060 04:46:51 -- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock 00:06:28.060 04:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.060 04:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.060 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.060 ************************************ 00:06:28.060 START TEST unittest_sock 00:06:28.060 ************************************ 00:06:28.060 04:46:51 -- common/autotest_common.sh@1114 -- # unittest_sock 00:06:28.060 04:46:51 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:28.060 00:06:28.060 00:06:28.060 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.060 http://cunit.sourceforge.net/ 00:06:28.060 00:06:28.060 00:06:28.060 Suite: sock 
00:06:28.060 Test: posix_sock ...passed 00:06:28.060 Test: ut_sock ...passed 00:06:28.060 Test: posix_sock_group ...passed 00:06:28.060 Test: ut_sock_group ...passed 00:06:28.060 Test: posix_sock_group_fairness ...passed 00:06:28.061 Test: _posix_sock_close ...passed 00:06:28.061 Test: sock_get_default_opts ...passed 00:06:28.061 Test: ut_sock_impl_get_set_opts ...passed 00:06:28.061 Test: posix_sock_impl_get_set_opts ...passed 00:06:28.061 Test: ut_sock_map ...passed 00:06:28.061 Test: override_impl_opts ...passed 00:06:28.061 Test: ut_sock_group_get_ctx ...passed 00:06:28.061 00:06:28.061 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.061 suites 1 1 n/a 0 0 00:06:28.061 tests 12 12 12 0 0 00:06:28.061 asserts 349 349 349 0 n/a 00:06:28.061 00:06:28.061 Elapsed time = 0.009 seconds 00:06:28.061 04:46:51 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:28.326 00:06:28.326 00:06:28.326 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.326 http://cunit.sourceforge.net/ 00:06:28.326 00:06:28.326 00:06:28.326 Suite: posix 00:06:28.326 Test: flush ...passed 00:06:28.326 00:06:28.326 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.326 suites 1 1 n/a 0 0 00:06:28.326 tests 1 1 1 0 0 00:06:28.326 asserts 28 28 28 0 n/a 00:06:28.326 00:06:28.326 Elapsed time = 0.000 seconds 00:06:28.326 04:46:51 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.326 00:06:28.326 real 0m0.106s 00:06:28.326 user 0m0.036s 00:06:28.326 sys 0m0.048s 00:06:28.326 04:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.326 ************************************ 00:06:28.326 END TEST unittest_sock 00:06:28.326 ************************************ 00:06:28.326 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.326 04:46:51 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:28.326 04:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.326 04:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.326 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.326 ************************************ 00:06:28.326 START TEST unittest_thread 00:06:28.326 ************************************ 00:06:28.326 04:46:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:28.326 00:06:28.326 00:06:28.326 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.326 http://cunit.sourceforge.net/ 00:06:28.326 00:06:28.326 00:06:28.326 Suite: io_channel 00:06:28.327 Test: thread_alloc ...passed 00:06:28.327 Test: thread_send_msg ...passed 00:06:28.327 Test: thread_poller ...passed 00:06:28.327 Test: poller_pause ...passed 00:06:28.327 Test: thread_for_each ...passed 00:06:28.327 Test: for_each_channel_remove ...passed 00:06:28.327 Test: for_each_channel_unreg ...[2024-11-18 04:46:51.701824] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x792f95509640 already registered (old:0x513000000200 new:0x5130000003c0) 00:06:28.327 passed 00:06:28.327 Test: thread_name ...passed 00:06:28.327 Test: channel ...[2024-11-18 04:46:51.706607] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x5f7597973120 00:06:28.327 passed 00:06:28.327 Test: channel_destroy_races ...passed 
00:06:28.327 Test: thread_exit_test ...[2024-11-18 04:46:51.712434] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x518000005c80 got timeout, and move it to the exited state forcefully 00:06:28.327 passed 00:06:28.327 Test: thread_update_stats_test ...passed 00:06:28.327 Test: nested_channel ...passed 00:06:28.327 Test: device_unregister_and_thread_exit_race ...passed 00:06:28.327 Test: cache_closest_timed_poller ...passed 00:06:28.327 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:28.327 Test: io_device_lookup ...passed 00:06:28.327 Test: spdk_spin ...[2024-11-18 04:46:51.724952] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:28.327 [2024-11-18 04:46:51.725021] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x792f9550a020 00:06:28.327 [2024-11-18 04:46:51.725077] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:28.327 [2024-11-18 04:46:51.727163] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:28.327 [2024-11-18 04:46:51.727233] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x792f9550a020 00:06:28.327 [2024-11-18 04:46:51.727269] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:28.327 [2024-11-18 04:46:51.727305] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x792f9550a020 00:06:28.327 [2024-11-18 04:46:51.727346] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:28.327 [2024-11-18 04:46:51.727392] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x792f9550a020 00:06:28.327 [2024-11-18 04:46:51.727409] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:28.327 [2024-11-18 04:46:51.727444] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x792f9550a020 00:06:28.327 passed 00:06:28.327 Test: for_each_channel_and_thread_exit_race ...passed 00:06:28.327 Test: for_each_thread_and_thread_exit_race ...passed 00:06:28.327 00:06:28.327 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.327 suites 1 1 n/a 0 0 00:06:28.327 tests 20 20 20 0 0 00:06:28.327 asserts 409 409 409 0 n/a 00:06:28.327 00:06:28.327 Elapsed time = 0.060 seconds 00:06:28.327 00:06:28.327 real 0m0.103s 00:06:28.327 user 0m0.068s 00:06:28.327 sys 0m0.035s 00:06:28.327 04:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.327 ************************************ 00:06:28.327 END TEST unittest_thread 00:06:28.327 ************************************ 00:06:28.327 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.327 04:46:51 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:28.327 04:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
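The spdk_spin errors in the thread suite above ("Deadlock detected", "Unlock on wrong SPDK thread", "Destroying a held spinlock") are the misuse classes an owner-tracking lock can detect. A sketch of that idea, not lib/thread source, using plain pthreads; the unlocked reads of the owner fields are fine for a sketch but would need care in real code:

```c
/* Owner-tracking lock sketch -- illustrative, not SPDK's spdk_spinlock. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct dbg_spin {
    pthread_mutex_t mtx;
    pthread_t owner;
    bool held;
};

static void dbg_spin_init(struct dbg_spin *s)
{
    pthread_mutex_init(&s->mtx, NULL);
    s->held = false;
}

static void dbg_spin_lock(struct dbg_spin *s)
{
    /* the current owner taking the lock again == self-deadlock */
    assert(!(s->held && pthread_equal(s->owner, pthread_self())));
    pthread_mutex_lock(&s->mtx);
    s->owner = pthread_self();
    s->held = true;
}

static void dbg_spin_unlock(struct dbg_spin *s)
{
    /* only the owner may release ("Unlock on wrong ... thread") */
    assert(s->held && pthread_equal(s->owner, pthread_self()));
    s->held = false;
    pthread_mutex_unlock(&s->mtx);
}

static void dbg_spin_destroy(struct dbg_spin *s)
{
    assert(!s->held);  /* "Destroying a held spinlock" */
    pthread_mutex_destroy(&s->mtx);
}
```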
00:06:28.327 04:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.327 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.327 ************************************ 00:06:28.327 START TEST unittest_iobuf 00:06:28.327 ************************************ 00:06:28.327 04:46:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:28.327 00:06:28.327 00:06:28.327 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.327 http://cunit.sourceforge.net/ 00:06:28.327 00:06:28.327 00:06:28.327 Suite: io_channel 00:06:28.327 Test: iobuf ...passed 00:06:28.327 Test: iobuf_cache ...[2024-11-18 04:46:51.842865] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:28.327 [2024-11-18 04:46:51.843154] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:28.327 [2024-11-18 04:46:51.843329] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:28.327 [2024-11-18 04:46:51.843366] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:28.327 [2024-11-18 04:46:51.843439] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:28.327 [2024-11-18 04:46:51.843477] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:06:28.613 passed 00:06:28.613 00:06:28.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.613 suites 1 1 n/a 0 0 00:06:28.613 tests 2 2 2 0 0 00:06:28.613 asserts 107 107 107 0 n/a 00:06:28.613 00:06:28.613 Elapsed time = 0.007 seconds 00:06:28.613 00:06:28.613 real 0m0.043s 00:06:28.613 user 0m0.025s 00:06:28.613 sys 0m0.018s 00:06:28.613 04:46:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.613 ************************************ 00:06:28.613 END TEST unittest_iobuf 00:06:28.613 ************************************ 00:06:28.613 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.613 04:46:51 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:06:28.613 04:46:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.613 04:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.613 04:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.613 ************************************ 00:06:28.613 START TEST unittest_util 00:06:28.613 ************************************ 00:06:28.613 04:46:51 -- common/autotest_common.sh@1114 -- # unittest_util 00:06:28.613 04:46:51 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:28.613 00:06:28.613 00:06:28.613 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.613 http://cunit.sourceforge.net/ 00:06:28.613 00:06:28.613 00:06:28.613 Suite: base64 00:06:28.613 Test: test_base64_get_encoded_strlen ...passed 00:06:28.613 Test: test_base64_get_decoded_len ...passed 00:06:28.613 Test: test_base64_encode ...passed 00:06:28.613 Test: test_base64_decode ...passed 00:06:28.613 Test: test_base64_urlsafe_encode ...passed 00:06:28.613 Test: test_base64_urlsafe_decode ...passed 00:06:28.613 00:06:28.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.613 suites 1 1 n/a 0 0 00:06:28.613 tests 6 6 6 0 0 00:06:28.613 asserts 112 112 112 0 n/a 00:06:28.613 00:06:28.613 Elapsed time = 0.000 seconds 00:06:28.613 04:46:51 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:28.613 00:06:28.613 00:06:28.613 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.613 http://cunit.sourceforge.net/ 00:06:28.613 00:06:28.613 00:06:28.613 Suite: bit_array 00:06:28.613 Test: test_1bit ...passed 00:06:28.613 Test: test_64bit ...passed 00:06:28.613 Test: test_find ...passed 00:06:28.613 Test: test_resize ...passed 00:06:28.613 Test: test_errors ...passed 00:06:28.613 Test: test_count ...passed 00:06:28.613 Test: test_mask_store_load ...passed 00:06:28.613 Test: test_mask_clear ...passed 00:06:28.613 00:06:28.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.613 suites 1 1 n/a 0 0 00:06:28.613 tests 8 8 8 0 0 00:06:28.613 asserts 5075 5075 5075 0 n/a 00:06:28.613 00:06:28.613 Elapsed time = 0.002 seconds 00:06:28.613 04:46:51 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:28.613 00:06:28.613 00:06:28.613 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.613 http://cunit.sourceforge.net/ 00:06:28.613 00:06:28.613 00:06:28.613 Suite: cpuset 00:06:28.613 Test: test_cpuset ...passed 00:06:28.613 Test: test_cpuset_parse ...[2024-11-18 04:46:51.985967] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:28.613 passed 00:06:28.613 Test: test_cpuset_fmt ...[2024-11-18 04:46:51.986229] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 
241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:06:28.613 [2024-11-18 04:46:51.986281] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:28.613 [2024-11-18 04:46:51.986319] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:28.613 [2024-11-18 04:46:51.986359] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:28.613 [2024-11-18 04:46:51.986392] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:28.613 [2024-11-18 04:46:51.986415] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:28.613 [2024-11-18 04:46:51.986448] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:28.613 passed 00:06:28.613 00:06:28.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.613 suites 1 1 n/a 0 0 00:06:28.613 tests 3 3 3 0 0 00:06:28.613 asserts 65 65 65 0 n/a 00:06:28.613 00:06:28.614 Elapsed time = 0.002 seconds 00:06:28.614 04:46:52 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:28.614 00:06:28.614 00:06:28.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.614 http://cunit.sourceforge.net/ 00:06:28.614 00:06:28.614 00:06:28.614 Suite: crc16 00:06:28.614 Test: test_crc16_t10dif ...passed 00:06:28.614 Test: test_crc16_t10dif_seed ...passed 00:06:28.614 Test: test_crc16_t10dif_copy ...passed 00:06:28.614 00:06:28.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.614 suites 1 1 n/a 0 0 00:06:28.614 tests 3 3 3 0 0 00:06:28.614 asserts 5 5 5 0 n/a 00:06:28.614 00:06:28.614 Elapsed time = 0.000 seconds 00:06:28.614 04:46:52 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:28.614 00:06:28.614 00:06:28.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.614 http://cunit.sourceforge.net/ 00:06:28.614 00:06:28.614 00:06:28.614 Suite: crc32_ieee 00:06:28.614 Test: test_crc32_ieee ...passed 00:06:28.614 00:06:28.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.614 suites 1 1 n/a 0 0 00:06:28.614 tests 1 1 1 0 0 00:06:28.614 asserts 1 1 1 0 n/a 00:06:28.614 00:06:28.614 Elapsed time = 0.000 seconds 00:06:28.614 04:46:52 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:28.614 00:06:28.614 00:06:28.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.614 http://cunit.sourceforge.net/ 00:06:28.614 00:06:28.614 00:06:28.614 Suite: crc32c 00:06:28.614 Test: test_crc32c ...passed 00:06:28.614 Test: test_crc32c_nvme ...passed 00:06:28.614 00:06:28.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.614 suites 1 1 n/a 0 0 00:06:28.614 tests 2 2 2 0 0 00:06:28.614 asserts 16 16 16 0 n/a 00:06:28.614 00:06:28.614 Elapsed time = 0.000 seconds 00:06:28.614 04:46:52 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:28.614 00:06:28.614 00:06:28.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.614 http://cunit.sourceforge.net/ 00:06:28.614 00:06:28.614 00:06:28.614 Suite: crc64 00:06:28.614 Test: 
test_crc64_nvme ...passed 00:06:28.614 00:06:28.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.614 suites 1 1 n/a 0 0 00:06:28.614 tests 1 1 1 0 0 00:06:28.614 asserts 4 4 4 0 n/a 00:06:28.614 00:06:28.614 Elapsed time = 0.000 seconds 00:06:28.614 04:46:52 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:28.614 00:06:28.614 00:06:28.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.614 http://cunit.sourceforge.net/ 00:06:28.614 00:06:28.614 00:06:28.614 Suite: string 00:06:28.614 Test: test_parse_ip_addr ...passed 00:06:28.614 Test: test_str_chomp ...passed 00:06:28.614 Test: test_parse_capacity ...passed 00:06:28.614 Test: test_sprintf_append_realloc ...passed 00:06:28.614 Test: test_strtol ...passed 00:06:28.614 Test: test_strtoll ...passed 00:06:28.614 Test: test_strarray ...passed 00:06:28.614 Test: test_strcpy_replace ...passed 00:06:28.614 00:06:28.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.614 suites 1 1 n/a 0 0 00:06:28.614 tests 8 8 8 0 0 00:06:28.614 asserts 161 161 161 0 n/a 00:06:28.614 00:06:28.614 Elapsed time = 0.001 seconds 00:06:28.614 04:46:52 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:28.877 00:06:28.877 00:06:28.877 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.877 http://cunit.sourceforge.net/ 00:06:28.877 00:06:28.877 00:06:28.877 Suite: dif 00:06:28.877 Test: dif_generate_and_verify_test ...[2024-11-18 04:46:52.139112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:28.877 [2024-11-18 04:46:52.139536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:28.877 [2024-11-18 04:46:52.139834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:28.877 [2024-11-18 04:46:52.140110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:28.877 [2024-11-18 04:46:52.140416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:28.877 [2024-11-18 04:46:52.140693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:28.877 passed 00:06:28.877 Test: dif_disable_check_test ...[2024-11-18 04:46:52.141743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:28.877 [2024-11-18 04:46:52.142039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:28.877 [2024-11-18 04:46:52.142345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:28.877 passed 00:06:28.877 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-18 04:46:52.143447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:28.877 [2024-11-18 04:46:52.143765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:28.877 [2024-11-18 
04:46:52.144077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:28.877 [2024-11-18 04:46:52.144448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:28.877 [2024-11-18 04:46:52.144750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:28.877 [2024-11-18 04:46:52.145055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:28.877 [2024-11-18 04:46:52.145373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:28.877 [2024-11-18 04:46:52.145688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:28.877 [2024-11-18 04:46:52.146022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:28.877 [2024-11-18 04:46:52.146377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:28.877 [2024-11-18 04:46:52.146676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:28.877 passed 00:06:28.877 Test: dif_apptag_mask_test ...[2024-11-18 04:46:52.146989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:28.877 [2024-11-18 04:46:52.147302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:28.877 passed 00:06:28.877 Test: dif_sec_512_md_0_error_test ...[2024-11-18 04:46:52.147489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:28.877 passed 00:06:28.877 Test: dif_sec_4096_md_0_error_test ...passed 00:06:28.877 Test: dif_sec_4100_md_128_error_test ...passed 00:06:28.877 Test: dif_guard_seed_test ...[2024-11-18 04:46:52.147527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:28.877 [2024-11-18 04:46:52.147560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
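Background for the long run of "Failed to compare Guard / App Tag / Ref Tag" messages in this suite: T10 DIF appends an 8-byte protection-information trailer per data block, and the dif_ut tests deliberately corrupt each field so the verify path reports a mismatch and the test asserts on it. The sketch below is illustrative, not lib/util/dif.c; the CRC-16/T10-DIF parameters (polynomial 0x8bb7, init 0, unreflected) are the standard published ones.

```c
/* T10 DIF trailer and guard CRC -- illustrative, not SPDK source. */
#include <stddef.h>
#include <stdint.h>

struct t10_pi {
    uint16_t guard;    /* CRC16 of the block's data */
    uint16_t app_tag;
    uint32_t ref_tag;  /* typically the low 32 bits of the LBA */
};

/* Bitwise MSB-first CRC-16/T10-DIF -- slow but clear. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8bb7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

Verification recomputes the guard over the block and compares it to the stored field; a mismatch is what produces log lines of the form "Failed to compare Guard: ... Expected=fd4c, Actual=1996" above.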
00:06:28.877 [2024-11-18 04:46:52.147603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:28.877 [2024-11-18 04:46:52.147637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:28.877 passed 00:06:28.877 Test: dif_guard_value_test ...passed 00:06:28.877 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:28.877 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:28.877 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 04:46:52.192055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc4c, Actual=fd4c 00:06:28.877 [2024-11-18 04:46:52.194552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff21, Actual=fe21 00:06:28.877 [2024-11-18 04:46:52.197008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.199494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.201940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.877 [2024-11-18 04:46:52.204403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.877 [2024-11-18 04:46:52.206825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=1996 00:06:28.877 [2024-11-18 04:46:52.208595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe21, Actual=c3d4 00:06:28.877 [2024-11-18 04:46:52.210351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1bb753ed, Actual=1ab753ed 00:06:28.877 [2024-11-18 
04:46:52.212799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=39574660, Actual=38574660 00:06:28.877 [2024-11-18 04:46:52.215244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.217688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.220152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.877 [2024-11-18 04:46:52.222621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.877 [2024-11-18 04:46:52.225074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=b8ad4330 00:06:28.877 [2024-11-18 04:46:52.226831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574660, Actual=133c15e8 00:06:28.877 [2024-11-18 04:46:52.228563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.877 [2024-11-18 04:46:52.231006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4937a266, Actual=88010a2d4837a266 00:06:28.877 [2024-11-18 04:46:52.233455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.235925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.238404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.877 [2024-11-18 04:46:52.240850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.877 [2024-11-18 04:46:52.243294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.877 [2024-11-18 04:46:52.245032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4837a266, Actual=748bbb6d85fb0f20 00:06:28.877 passed 00:06:28.877 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-18 04:46:52.245916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:28.877 [2024-11-18 04:46:52.246253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:28.877 [2024-11-18 04:46:52.246533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.246821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.877 [2024-11-18 04:46:52.247109] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.877 [2024-11-18 04:46:52.247426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.877 [2024-11-18 04:46:52.247712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:28.878 [2024-11-18 04:46:52.247972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c3d4 00:06:28.878 [2024-11-18 04:46:52.248237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:28.878 [2024-11-18 04:46:52.248543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:06:28.878 [2024-11-18 04:46:52.248836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.249112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.249402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.249693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.249993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:28.878 [2024-11-18 04:46:52.250261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=133c15e8 00:06:28.878 [2024-11-18 04:46:52.250533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.878 [2024-11-18 04:46:52.250824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4937a266, Actual=88010a2d4837a266 00:06:28.878 [2024-11-18 04:46:52.251113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.251417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.251698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.252059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.252381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.878 [2024-11-18 04:46:52.252657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=748bbb6d85fb0f20 00:06:28.878 passed 00:06:28.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-18 04:46:52.252957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:28.878 [2024-11-18 04:46:52.253260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:28.878 [2024-11-18 04:46:52.253569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.253881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.254173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.254495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.254788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:28.878 [2024-11-18 04:46:52.255038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c3d4 00:06:28.878 [2024-11-18 04:46:52.255319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:28.878 [2024-11-18 04:46:52.255617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:06:28.878 [2024-11-18 04:46:52.255908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.256225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.256507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.256812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.257085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:28.878 [2024-11-18 04:46:52.257370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=133c15e8 00:06:28.878 [2024-11-18 04:46:52.257628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.878 [2024-11-18 04:46:52.257943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4937a266, Actual=88010a2d4837a266 00:06:28.878 [2024-11-18 04:46:52.258254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.258554] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.258863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.259145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.259452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.878 [2024-11-18 04:46:52.259734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=748bbb6d85fb0f20 00:06:28.878 passed 00:06:28.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-18 04:46:52.260019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:28.878 [2024-11-18 04:46:52.260326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:28.878 [2024-11-18 04:46:52.260615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.260911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.261244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.261539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.261843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:28.878 [2024-11-18 04:46:52.262118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c3d4 00:06:28.878 [2024-11-18 04:46:52.262396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:28.878 [2024-11-18 04:46:52.262707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:06:28.878 [2024-11-18 04:46:52.263000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.263311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.263607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.263902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.264177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:28.878 [2024-11-18 04:46:52.264462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=133c15e8 00:06:28.878 [2024-11-18 04:46:52.264719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.878 [2024-11-18 04:46:52.265000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4937a266, Actual=88010a2d4837a266 00:06:28.878 [2024-11-18 04:46:52.265322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.265623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.265931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.266281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.266576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.878 [2024-11-18 04:46:52.266850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=748bbb6d85fb0f20 00:06:28.878 passed 00:06:28.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-18 04:46:52.267135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:28.878 [2024-11-18 04:46:52.267437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:28.878 [2024-11-18 04:46:52.267730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.268019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.878 [2024-11-18 04:46:52.268315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.268612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.878 [2024-11-18 04:46:52.268898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:28.878 [2024-11-18 04:46:52.269168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c3d4 00:06:28.878 passed 00:06:28.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-18 04:46:52.269489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:28.878 [2024-11-18 04:46:52.269804] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:06:28.879 [2024-11-18 04:46:52.270093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.270414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.270707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.270999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.271293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:28.879 [2024-11-18 04:46:52.271563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=133c15e8 00:06:28.879 [2024-11-18 04:46:52.271857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.879 [2024-11-18 04:46:52.272157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4937a266, Actual=88010a2d4837a266 00:06:28.879 [2024-11-18 04:46:52.272466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.272776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.273065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.273386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.273673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.879 [2024-11-18 04:46:52.273953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=748bbb6d85fb0f20 00:06:28.879 passed 00:06:28.879 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-18 04:46:52.274287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:28.879 [2024-11-18 04:46:52.274580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:28.879 [2024-11-18 04:46:52.274856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.275154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.275454] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.275764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.276042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:28.879 [2024-11-18 04:46:52.276321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c3d4 00:06:28.879 passed 00:06:28.879 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-18 04:46:52.276632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:28.879 [2024-11-18 04:46:52.276918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=39574660, Actual=38574660 00:06:28.879 [2024-11-18 04:46:52.277225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.277537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.277834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.278139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.278449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:28.879 [2024-11-18 04:46:52.278708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=133c15e8 00:06:28.879 [2024-11-18 04:46:52.279023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.879 [2024-11-18 04:46:52.279331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4937a266, Actual=88010a2d4837a266 00:06:28.879 [2024-11-18 04:46:52.279609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.279907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.280206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.280496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.280781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.879 [2024-11-18 04:46:52.281036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=748bbb6d85fb0f20 00:06:28.879 passed 00:06:28.879 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:28.879 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:28.879 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:28.879 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 04:46:52.325310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc4c, Actual=fd4c 00:06:28.879 [2024-11-18 04:46:52.326443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=147e, Actual=157e 00:06:28.879 [2024-11-18 04:46:52.327542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.328638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.329768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.879 [2024-11-18 04:46:52.330898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.879 [2024-11-18 04:46:52.332006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=1996 00:06:28.879 [2024-11-18 04:46:52.333116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=7362 00:06:28.879 [2024-11-18 04:46:52.334255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1bb753ed, Actual=1ab753ed 00:06:28.879 [2024-11-18 04:46:52.335364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff6969, Actual=1ff6969 00:06:28.879 [2024-11-18 04:46:52.336454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.337571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.338703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.879 [2024-11-18 04:46:52.339814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.879 [2024-11-18 04:46:52.340924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=b8ad4330 
00:06:28.879 [2024-11-18 04:46:52.342050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=4d26d1 00:06:28.879 [2024-11-18 04:46:52.343158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.879 [2024-11-18 04:46:52.344271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=62a2aed72767cc90, Actual=62a2aed72667cc90 00:06:28.879 [2024-11-18 04:46:52.345389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.346505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.347592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.879 [2024-11-18 04:46:52.348696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:28.879 [2024-11-18 04:46:52.349807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.879 [2024-11-18 04:46:52.350927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=310e94c8bdf919bb 00:06:28.879 passed 00:06:28.879 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 04:46:52.351289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:28.879 [2024-11-18 04:46:52.351559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:06:28.879 [2024-11-18 04:46:52.351806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.352069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.879 [2024-11-18 04:46:52.352333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.352583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.879 [2024-11-18 04:46:52.352843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:28.879 [2024-11-18 04:46:52.353112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=10e3 00:06:28.880 [2024-11-18 04:46:52.353376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:28.880 [2024-11-18 04:46:52.353628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c17f5c9c, Actual=c07f5c9c 00:06:28.880 [2024-11-18 04:46:52.353902] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.880 [2024-11-18 04:46:52.354152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.880 [2024-11-18 04:46:52.354447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.880 [2024-11-18 04:46:52.354710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.880 [2024-11-18 04:46:52.354968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:28.880 [2024-11-18 04:46:52.355224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=c1cd1324 00:06:28.880 [2024-11-18 04:46:52.355486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:28.880 [2024-11-18 04:46:52.355746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9740a144018848cf, Actual=9740a144008848cf 00:06:28.880 [2024-11-18 04:46:52.355994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.880 [2024-11-18 04:46:52.356285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:28.880 [2024-11-18 04:46:52.356558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.880 [2024-11-18 04:46:52.356806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:28.880 [2024-11-18 04:46:52.357072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:28.880 [2024-11-18 04:46:52.357359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c4ec9b5b9b169de4 00:06:28.880 passed 00:06:28.880 Test: dix_sec_512_md_0_error ...passed 00:06:28.880 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-11-18 04:46:52.357418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:06:28.880 passed 00:06:28.880 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:28.880 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:28.880 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:29.140 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:29.140 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:29.140 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:29.140 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:29.140 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:29.140 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 04:46:52.401984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc4c, Actual=fd4c 00:06:29.140 [2024-11-18 04:46:52.403108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=147e, Actual=157e 00:06:29.140 [2024-11-18 04:46:52.404233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.405320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.406454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:29.140 [2024-11-18 04:46:52.407572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:29.140 [2024-11-18 04:46:52.408678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=1996 00:06:29.140 [2024-11-18 04:46:52.409808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=7362 00:06:29.140 [2024-11-18 04:46:52.410947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1bb753ed, Actual=1ab753ed 00:06:29.140 [2024-11-18 04:46:52.412045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff6969, Actual=1ff6969 00:06:29.140 [2024-11-18 04:46:52.413182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.414318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.415416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:29.140 [2024-11-18 04:46:52.416506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:29.140 [2024-11-18 04:46:52.417619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=b8ad4330 00:06:29.140 [2024-11-18 04:46:52.418725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=4d26d1 00:06:29.140 [2024-11-18 04:46:52.419867] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:29.140 [2024-11-18 04:46:52.420990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=62a2aed72767cc90, Actual=62a2aed72667cc90 00:06:29.140 [2024-11-18 04:46:52.422122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.423238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.424341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:29.140 [2024-11-18 04:46:52.425432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100005c 00:06:29.140 [2024-11-18 04:46:52.426531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:29.140 passed 00:06:29.140 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 04:46:52.427631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=310e94c8bdf919bb 00:06:29.140 [2024-11-18 04:46:52.427960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:29.140 [2024-11-18 04:46:52.428228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:06:29.140 [2024-11-18 04:46:52.428482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.428732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:29.140 [2024-11-18 04:46:52.428998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:29.141 [2024-11-18 04:46:52.429258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:29.141 [2024-11-18 04:46:52.429510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1996 00:06:29.141 [2024-11-18 04:46:52.429760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=10e3 00:06:29.141 [2024-11-18 04:46:52.430015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1bb753ed, Actual=1ab753ed 00:06:29.141 [2024-11-18 04:46:52.430295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c17f5c9c, Actual=c07f5c9c 00:06:29.141 [2024-11-18 04:46:52.430573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:29.141 [2024-11-18 04:46:52.430827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:29.141 [2024-11-18 04:46:52.431072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:29.141 [2024-11-18 04:46:52.431337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:29.141 [2024-11-18 04:46:52.431605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b8ad4330 00:06:29.141 [2024-11-18 04:46:52.431866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=c1cd1324 00:06:29.141 [2024-11-18 04:46:52.432132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728fcc20d3, Actual=a576a7728ecc20d3 00:06:29.141 [2024-11-18 04:46:52.432389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9740a144018848cf, Actual=9740a144008848cf 00:06:29.141 [2024-11-18 04:46:52.432642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:29.141 [2024-11-18 04:46:52.432886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:29.141 [2024-11-18 04:46:52.433150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:29.141 [2024-11-18 04:46:52.433436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000058 00:06:29.141 [2024-11-18 04:46:52.433691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4d663ac2552044a7 00:06:29.141 [2024-11-18 04:46:52.433956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c4ec9b5b9b169de4 00:06:29.141 passed 00:06:29.141 Test: set_md_interleave_iovs_test ...passed 00:06:29.141 Test: set_md_interleave_iovs_split_test ...passed 00:06:29.141 Test: dif_generate_stream_pi_16_test ...passed 00:06:29.141 Test: dif_generate_stream_test ...passed 00:06:29.141 Test: set_md_interleave_iovs_alignment_test ...passed 00:06:29.141 Test: dif_generate_split_test ...[2024-11-18 04:46:52.439879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:06:29.141 passed 00:06:29.141 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:29.141 Test: dif_verify_split_test ...passed 00:06:29.141 Test: dif_verify_stream_multi_segments_test ...passed 00:06:29.141 Test: update_crc32c_pi_16_test ...passed 00:06:29.141 Test: update_crc32c_test ...passed 00:06:29.141 Test: dif_update_crc32c_split_test ...passed 00:06:29.141 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:29.141 Test: get_range_with_md_test ...passed 00:06:29.141 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:29.141 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:29.141 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:29.141 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:29.141 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:29.141 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:29.141 Test: dif_generate_and_verify_unmap_test ...passed 00:06:29.141 00:06:29.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.141 suites 1 1 n/a 0 0 00:06:29.141 tests 79 79 79 0 0 00:06:29.141 asserts 3584 3584 3584 0 n/a 00:06:29.141 00:06:29.141 Elapsed time = 0.338 seconds 00:06:29.141 04:46:52 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:29.141 00:06:29.141 00:06:29.141 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.141 http://cunit.sourceforge.net/ 00:06:29.141 00:06:29.141 00:06:29.141 Suite: iov 00:06:29.141 Test: test_single_iov ...passed 00:06:29.141 Test: test_simple_iov ...passed 00:06:29.141 Test: test_complex_iov ...passed 00:06:29.141 Test: test_iovs_to_buf ...passed 00:06:29.141 Test: test_buf_to_iovs ...passed 00:06:29.141 Test: test_memset ...passed 00:06:29.141 Test: test_iov_one ...passed 00:06:29.141 Test: test_iov_xfer ...passed 00:06:29.141 00:06:29.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.141 suites 1 1 n/a 0 0 00:06:29.141 tests 8 8 8 0 0 00:06:29.141 asserts 156 156 156 0 n/a 00:06:29.141 00:06:29.141 Elapsed time = 0.000 seconds 00:06:29.141 04:46:52 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:29.141 00:06:29.141 00:06:29.141 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.141 http://cunit.sourceforge.net/ 00:06:29.141 00:06:29.141 00:06:29.141 Suite: math 00:06:29.141 Test: test_serial_number_arithmetic ...passed 00:06:29.141 Suite: erase 00:06:29.141 Test: test_memset_s ...passed 00:06:29.141 00:06:29.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.141 suites 2 2 n/a 0 0 00:06:29.141 tests 2 2 2 0 0 00:06:29.141 asserts 18 18 18 0 n/a 00:06:29.141 00:06:29.141 Elapsed time = 0.000 seconds 00:06:29.141 04:46:52 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:29.141 00:06:29.141 00:06:29.141 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.141 http://cunit.sourceforge.net/ 00:06:29.141 00:06:29.141 00:06:29.141 Suite: pipe 00:06:29.141 Test: test_create_destroy ...passed 00:06:29.141 Test: test_write_get_buffer ...passed 00:06:29.141 Test: test_write_advance ...passed 00:06:29.141 Test: test_read_get_buffer ...passed 00:06:29.141 Test: test_read_advance ...passed 00:06:29.141 Test: test_data ...passed 00:06:29.141 00:06:29.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.141 suites 1 1 n/a 0 
0 00:06:29.141 tests 6 6 6 0 0 00:06:29.141 asserts 250 250 250 0 n/a 00:06:29.141 00:06:29.141 Elapsed time = 0.000 seconds 00:06:29.141 04:46:52 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:29.141 00:06:29.141 00:06:29.141 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.141 http://cunit.sourceforge.net/ 00:06:29.141 00:06:29.141 00:06:29.141 Suite: xor 00:06:29.141 Test: test_xor_gen ...passed 00:06:29.141 00:06:29.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.141 suites 1 1 n/a 0 0 00:06:29.141 tests 1 1 1 0 0 00:06:29.141 asserts 17 17 17 0 n/a 00:06:29.141 00:06:29.141 Elapsed time = 0.006 seconds 00:06:29.141 00:06:29.141 real 0m0.673s 00:06:29.141 user 0m0.478s 00:06:29.141 sys 0m0.199s 00:06:29.141 04:46:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.141 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.141 ************************************ 00:06:29.141 END TEST unittest_util 00:06:29.141 ************************************ 00:06:29.141 04:46:52 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:29.141 04:46:52 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:29.141 04:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.141 04:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.141 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.141 ************************************ 00:06:29.141 START TEST unittest_vhost 00:06:29.141 ************************************ 00:06:29.141 04:46:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:29.401 00:06:29.401 00:06:29.401 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.401 http://cunit.sourceforge.net/ 00:06:29.401 00:06:29.401 00:06:29.401 Suite: vhost_suite 00:06:29.401 Test: desc_to_iov_test ...[2024-11-18 04:46:52.673484] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:29.401 passed 00:06:29.401 Test: create_controller_test ...[2024-11-18 04:46:52.678670] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:29.401 [2024-11-18 04:46:52.678792] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:29.401 [2024-11-18 04:46:52.678908] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:29.401 [2024-11-18 04:46:52.678997] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:29.401 [2024-11-18 04:46:52.679040] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:29.401 [2024-11-18 04:46:52.679120] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-11-18 04:46:52.680355] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:29.401 passed 00:06:29.401 Test: session_find_by_vid_test ...passed 00:06:29.401 Test: remove_controller_test ...[2024-11-18 04:46:52.682868] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:29.401 passed 00:06:29.401 Test: vq_avail_ring_get_test ...passed 00:06:29.401 Test: vq_packed_ring_test ...passed 00:06:29.401 Test: vhost_blk_construct_test ...passed 00:06:29.401 00:06:29.401 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.401 suites 1 1 n/a 0 0 00:06:29.402 tests 7 7 7 0 0 00:06:29.402 asserts 145 145 145 0 n/a 00:06:29.402 00:06:29.402 Elapsed time = 0.014 seconds 00:06:29.402 00:06:29.402 real 0m0.057s 00:06:29.402 user 0m0.034s 00:06:29.402 sys 0m0.023s 00:06:29.402 ************************************ 00:06:29.402 04:46:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.402 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.402 END TEST unittest_vhost 00:06:29.402 ************************************ 00:06:29.402 04:46:52 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:29.402 04:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.402 04:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.402 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.402 ************************************ 00:06:29.402 START TEST unittest_dma 00:06:29.402 ************************************ 00:06:29.402 04:46:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:29.402 00:06:29.402 00:06:29.402 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.402 http://cunit.sourceforge.net/ 00:06:29.402 00:06:29.402 00:06:29.402 Suite: dma_suite 00:06:29.402 Test: test_dma ...passed 00:06:29.402 00:06:29.402 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.402 suites 1 1 n/a 0 0 00:06:29.402 tests 1 1 1 0 0 00:06:29.402 asserts 50 50 50 0 n/a 00:06:29.402 00:06:29.402 Elapsed time = 0.000 seconds 00:06:29.402 [2024-11-18 04:46:52.777090] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:29.402 00:06:29.402 real 0m0.030s 00:06:29.402 user 0m0.015s 00:06:29.402 sys 0m0.016s 00:06:29.402 04:46:52 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.402 ************************************ 00:06:29.402 END TEST unittest_dma 00:06:29.402 ************************************ 00:06:29.402 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.402 04:46:52 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:06:29.402 04:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.402 04:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.402 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.402 ************************************ 00:06:29.402 START TEST unittest_init 00:06:29.402 ************************************ 00:06:29.402 04:46:52 -- common/autotest_common.sh@1114 -- # unittest_init 00:06:29.402 04:46:52 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:29.402 00:06:29.402 00:06:29.402 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.402 http://cunit.sourceforge.net/ 00:06:29.402 00:06:29.402 00:06:29.402 Suite: subsystem_suite 00:06:29.402 Test: subsystem_sort_test_depends_on_single ...passed 00:06:29.402 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:29.402 Test: subsystem_sort_test_missing_dependency ...passed 00:06:29.402 00:06:29.402 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.402 suites 1 1 n/a 0 0 00:06:29.402 tests 3 3 3 0 0 00:06:29.402 asserts 20 20 20 0 n/a 00:06:29.402 00:06:29.402 Elapsed time = 0.000 seconds 00:06:29.402 [2024-11-18 04:46:52.865957] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:29.402 [2024-11-18 04:46:52.866267] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:29.402 00:06:29.402 real 0m0.035s 00:06:29.402 user 0m0.017s 00:06:29.402 sys 0m0.019s 00:06:29.402 04:46:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.402 ************************************ 00:06:29.402 END TEST unittest_init 00:06:29.402 ************************************ 00:06:29.402 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.661 04:46:52 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:06:29.661 04:46:52 -- unit/unittest.sh@266 -- # hostname 00:06:29.661 04:46:52 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:29.661 geninfo: WARNING: invalid characters removed from testname! 
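The unit-test pass above ends with a fresh lcov capture of the counters it just produced. A minimal standalone sketch of that capture step, assuming a gcov-instrumented build rooted in the current directory; the extra geninfo/genhtml --rc switches from the logged command are elided here for brevity:

    # capture counters for this run only, skipping files outside the source tree;
    # the -t tag is the pool image name, exactly as in the command above
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         -q -d . -c --no-external \
         -t ubuntu2404-cloud-1720510786-2314 \
         -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info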
00:07:01.750 04:47:22 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:03.657 04:47:27 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:06.947 04:47:29 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:09.500 04:47:32 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:12.042 04:47:35 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:14.576 04:47:37 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:16.481 04:47:39 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:16.481 04:47:39 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:17.419 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:17.419 Found 313 entries. 00:07:17.419 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:17.419 Writing .css and .png files. 00:07:17.419 Generating output. 
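Everything genhtml reports below comes out of a short merge, filter, render pipeline. A condensed sketch of the sequence the harness just ran (unittest.sh steps @267 through @274), with the repeated --rc switches elided and the output directory held in a hypothetical $OUT variable standing in for the literal path used in the log:

    OUT=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    # fold the pre-test baseline and the fresh capture into one total tracefile,
    # then into the unit-level tracefile
    lcov -q -a "$OUT/ut_cov_base.info" -a "$OUT/ut_cov_test.info" -o "$OUT/ut_cov_total.info"
    lcov -q -a "$OUT/ut_cov_total.info" -o "$OUT/ut_cov_unit.info"
    # strip trees that should not count toward unit coverage
    for sub in app dpdk examples test; do
        lcov -q -r "$OUT/ut_cov_unit.info" "/home/vagrant/spdk_repo/spdk/$sub/*" \
             -o "$OUT/ut_cov_unit.info"
    done
    rm -f "$OUT/ut_cov_base.info" "$OUT/ut_cov_test.info"
    # render the per-file HTML report; each covered source file appears as one
    # "Processing file ..." line like the run that follows
    genhtml "$OUT/ut_cov_unit.info" --output-directory "$OUT"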
00:07:17.419 Processing file include/linux/virtio_ring.h 00:07:17.678 Processing file include/spdk/thread.h 00:07:17.678 Processing file include/spdk/base64.h 00:07:17.678 Processing file include/spdk/nvme.h 00:07:17.678 Processing file include/spdk/bdev_module.h 00:07:17.678 Processing file include/spdk/mmio.h 00:07:17.678 Processing file include/spdk/endian.h 00:07:17.678 Processing file include/spdk/nvme_spec.h 00:07:17.678 Processing file include/spdk/util.h 00:07:17.678 Processing file include/spdk/histogram_data.h 00:07:17.678 Processing file include/spdk/trace.h 00:07:17.678 Processing file include/spdk/nvmf_transport.h 00:07:17.678 Processing file include/spdk_internal/sgl.h 00:07:17.678 Processing file include/spdk_internal/virtio.h 00:07:17.678 Processing file include/spdk_internal/utf.h 00:07:17.678 Processing file include/spdk_internal/sock.h 00:07:17.678 Processing file include/spdk_internal/nvme_tcp.h 00:07:17.678 Processing file include/spdk_internal/rdma.h 00:07:17.937 Processing file lib/accel/accel_rpc.c 00:07:17.937 Processing file lib/accel/accel_sw.c 00:07:17.937 Processing file lib/accel/accel.c 00:07:18.196 Processing file lib/bdev/bdev.c 00:07:18.196 Processing file lib/bdev/part.c 00:07:18.196 Processing file lib/bdev/bdev_rpc.c 00:07:18.196 Processing file lib/bdev/bdev_zone.c 00:07:18.196 Processing file lib/bdev/scsi_nvme.c 00:07:18.456 Processing file lib/blob/request.c 00:07:18.456 Processing file lib/blob/blob_bs_dev.c 00:07:18.456 Processing file lib/blob/blobstore.h 00:07:18.456 Processing file lib/blob/zeroes.c 00:07:18.456 Processing file lib/blob/blobstore.c 00:07:18.456 Processing file lib/blobfs/tree.c 00:07:18.456 Processing file lib/blobfs/blobfs.c 00:07:18.715 Processing file lib/conf/conf.c 00:07:18.715 Processing file lib/dma/dma.c 00:07:18.975 Processing file lib/env_dpdk/pci.c 00:07:18.975 Processing file lib/env_dpdk/pci_event.c 00:07:18.975 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:18.975 Processing file lib/env_dpdk/pci_ioat.c 00:07:18.975 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:18.975 Processing file lib/env_dpdk/pci_dpdk.c 00:07:18.975 Processing file lib/env_dpdk/threads.c 00:07:18.975 Processing file lib/env_dpdk/pci_vmd.c 00:07:18.975 Processing file lib/env_dpdk/env.c 00:07:18.975 Processing file lib/env_dpdk/init.c 00:07:18.975 Processing file lib/env_dpdk/memory.c 00:07:18.975 Processing file lib/env_dpdk/pci_virtio.c 00:07:18.975 Processing file lib/env_dpdk/pci_idxd.c 00:07:18.975 Processing file lib/env_dpdk/sigbus_handler.c 00:07:18.975 Processing file lib/event/log_rpc.c 00:07:18.975 Processing file lib/event/scheduler_static.c 00:07:18.975 Processing file lib/event/app_rpc.c 00:07:18.975 Processing file lib/event/app.c 00:07:18.975 Processing file lib/event/reactor.c 00:07:19.544 Processing file lib/ftl/ftl_debug.c 00:07:19.544 Processing file lib/ftl/ftl_io.h 00:07:19.544 Processing file lib/ftl/ftl_io.c 00:07:19.544 Processing file lib/ftl/ftl_nv_cache_io.h 00:07:19.544 Processing file lib/ftl/ftl_debug.h 00:07:19.544 Processing file lib/ftl/ftl_band_ops.c 00:07:19.544 Processing file lib/ftl/ftl_band.h 00:07:19.544 Processing file lib/ftl/ftl_l2p_cache.c 00:07:19.544 Processing file lib/ftl/ftl_l2p.c 00:07:19.544 Processing file lib/ftl/ftl_p2l.c 00:07:19.544 Processing file lib/ftl/ftl_trace.c 00:07:19.544 Processing file lib/ftl/ftl_nv_cache.c 00:07:19.544 Processing file lib/ftl/ftl_l2p_flat.c 00:07:19.544 Processing file lib/ftl/ftl_nv_cache.h 00:07:19.544 Processing file lib/ftl/ftl_layout.c 
00:07:19.544 Processing file lib/ftl/ftl_reloc.c 00:07:19.544 Processing file lib/ftl/ftl_core.c 00:07:19.544 Processing file lib/ftl/ftl_writer.h 00:07:19.544 Processing file lib/ftl/ftl_rq.c 00:07:19.544 Processing file lib/ftl/ftl_init.c 00:07:19.544 Processing file lib/ftl/ftl_sb.c 00:07:19.544 Processing file lib/ftl/ftl_core.h 00:07:19.544 Processing file lib/ftl/ftl_writer.c 00:07:19.544 Processing file lib/ftl/ftl_band.c 00:07:19.544 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:19.544 Processing file lib/ftl/base/ftl_base_dev.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:19.804 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:20.063 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:20.063 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:20.063 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:20.063 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:20.063 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:20.063 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:20.323 Processing file lib/ftl/utils/ftl_property.c 00:07:20.323 Processing file lib/ftl/utils/ftl_mempool.c 00:07:20.323 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:20.323 Processing file lib/ftl/utils/ftl_conf.c 00:07:20.323 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:20.323 Processing file lib/ftl/utils/ftl_md.c 00:07:20.323 Processing file lib/ftl/utils/ftl_property.h 00:07:20.323 Processing file lib/ftl/utils/ftl_df.h 00:07:20.323 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:20.323 Processing file lib/idxd/idxd_user.c 00:07:20.323 Processing file lib/idxd/idxd_kernel.c 00:07:20.323 Processing file lib/idxd/idxd.c 00:07:20.323 Processing file lib/idxd/idxd_internal.h 00:07:20.583 Processing file lib/init/json_config.c 00:07:20.583 Processing file lib/init/rpc.c 00:07:20.583 Processing file lib/init/subsystem_rpc.c 00:07:20.583 Processing file lib/init/subsystem.c 00:07:20.583 Processing file lib/ioat/ioat_internal.h 00:07:20.583 Processing file lib/ioat/ioat.c 00:07:20.842 Processing file lib/iscsi/conn.c 00:07:20.842 Processing file lib/iscsi/md5.c 00:07:20.842 Processing file lib/iscsi/iscsi.h 00:07:20.842 Processing file lib/iscsi/param.c 00:07:20.842 Processing file lib/iscsi/iscsi_subsystem.c 00:07:20.842 Processing file lib/iscsi/tgt_node.c 00:07:20.842 Processing file lib/iscsi/portal_grp.c 00:07:20.842 Processing file lib/iscsi/iscsi_rpc.c 00:07:20.842 Processing file lib/iscsi/task.h 00:07:20.842 Processing file lib/iscsi/init_grp.c 00:07:20.842 Processing file lib/iscsi/iscsi.c 00:07:20.842 Processing file lib/iscsi/task.c 00:07:21.101 Processing file lib/json/json_parse.c 00:07:21.101 Processing file lib/json/json_util.c 00:07:21.101 Processing file lib/json/json_write.c 00:07:21.101 Processing file lib/jsonrpc/jsonrpc_server.c 00:07:21.101 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:07:21.101 
Processing file lib/jsonrpc/jsonrpc_client.c 00:07:21.101 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:21.361 Processing file lib/log/log_deprecated.c 00:07:21.361 Processing file lib/log/log.c 00:07:21.361 Processing file lib/log/log_flags.c 00:07:21.361 Processing file lib/lvol/lvol.c 00:07:21.361 Processing file lib/nbd/nbd.c 00:07:21.361 Processing file lib/nbd/nbd_rpc.c 00:07:21.361 Processing file lib/notify/notify_rpc.c 00:07:21.361 Processing file lib/notify/notify.c 00:07:22.297 Processing file lib/nvme/nvme.c 00:07:22.297 Processing file lib/nvme/nvme_poll_group.c 00:07:22.297 Processing file lib/nvme/nvme_quirks.c 00:07:22.297 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:22.297 Processing file lib/nvme/nvme_opal.c 00:07:22.297 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:22.297 Processing file lib/nvme/nvme_tcp.c 00:07:22.297 Processing file lib/nvme/nvme_pcie.c 00:07:22.297 Processing file lib/nvme/nvme_zns.c 00:07:22.297 Processing file lib/nvme/nvme_pcie_common.c 00:07:22.297 Processing file lib/nvme/nvme_vfio_user.c 00:07:22.297 Processing file lib/nvme/nvme_ctrlr.c 00:07:22.297 Processing file lib/nvme/nvme_ns_cmd.c 00:07:22.297 Processing file lib/nvme/nvme_transport.c 00:07:22.297 Processing file lib/nvme/nvme_qpair.c 00:07:22.297 Processing file lib/nvme/nvme_ns.c 00:07:22.297 Processing file lib/nvme/nvme_rdma.c 00:07:22.297 Processing file lib/nvme/nvme_cuse.c 00:07:22.297 Processing file lib/nvme/nvme_io_msg.c 00:07:22.297 Processing file lib/nvme/nvme_discovery.c 00:07:22.297 Processing file lib/nvme/nvme_internal.h 00:07:22.297 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:22.297 Processing file lib/nvme/nvme_fabric.c 00:07:22.297 Processing file lib/nvme/nvme_pcie_internal.h 00:07:22.865 Processing file lib/nvmf/nvmf.c 00:07:22.865 Processing file lib/nvmf/subsystem.c 00:07:22.865 Processing file lib/nvmf/nvmf_rpc.c 00:07:22.865 Processing file lib/nvmf/ctrlr_discovery.c 00:07:22.865 Processing file lib/nvmf/rdma.c 00:07:22.865 Processing file lib/nvmf/nvmf_internal.h 00:07:22.865 Processing file lib/nvmf/transport.c 00:07:22.865 Processing file lib/nvmf/ctrlr.c 00:07:22.865 Processing file lib/nvmf/ctrlr_bdev.c 00:07:22.865 Processing file lib/nvmf/tcp.c 00:07:22.865 Processing file lib/rdma/common.c 00:07:22.865 Processing file lib/rdma/rdma_verbs.c 00:07:22.865 Processing file lib/rpc/rpc.c 00:07:23.124 Processing file lib/scsi/lun.c 00:07:23.124 Processing file lib/scsi/scsi_rpc.c 00:07:23.124 Processing file lib/scsi/scsi.c 00:07:23.124 Processing file lib/scsi/dev.c 00:07:23.124 Processing file lib/scsi/port.c 00:07:23.124 Processing file lib/scsi/task.c 00:07:23.124 Processing file lib/scsi/scsi_pr.c 00:07:23.124 Processing file lib/scsi/scsi_bdev.c 00:07:23.124 Processing file lib/sock/sock_rpc.c 00:07:23.124 Processing file lib/sock/sock.c 00:07:23.124 Processing file lib/thread/thread.c 00:07:23.124 Processing file lib/thread/iobuf.c 00:07:23.384 Processing file lib/trace/trace.c 00:07:23.384 Processing file lib/trace/trace_flags.c 00:07:23.384 Processing file lib/trace/trace_rpc.c 00:07:23.384 Processing file lib/trace_parser/trace.cpp 00:07:23.384 Processing file lib/ublk/ublk.c 00:07:23.384 Processing file lib/ublk/ublk_rpc.c 00:07:23.643 Processing file lib/ut/ut.c 00:07:23.643 Processing file lib/ut_mock/mock.c 00:07:23.902 Processing file lib/util/file.c 00:07:23.902 Processing file lib/util/xor.c 00:07:23.902 Processing file lib/util/base64.c 00:07:23.902 Processing file lib/util/strerror_tls.c 00:07:23.902 Processing 
file lib/util/iov.c 00:07:23.902 Processing file lib/util/crc16.c 00:07:23.902 Processing file lib/util/hexlify.c 00:07:23.902 Processing file lib/util/crc32c.c 00:07:23.902 Processing file lib/util/crc64.c 00:07:23.902 Processing file lib/util/bit_array.c 00:07:23.902 Processing file lib/util/math.c 00:07:23.902 Processing file lib/util/crc32.c 00:07:23.902 Processing file lib/util/string.c 00:07:23.902 Processing file lib/util/pipe.c 00:07:23.902 Processing file lib/util/dif.c 00:07:23.902 Processing file lib/util/cpuset.c 00:07:23.902 Processing file lib/util/uuid.c 00:07:23.902 Processing file lib/util/fd_group.c 00:07:23.902 Processing file lib/util/crc32_ieee.c 00:07:23.902 Processing file lib/util/fd.c 00:07:23.902 Processing file lib/util/zipf.c 00:07:24.160 Processing file lib/vfio_user/host/vfio_user.c 00:07:24.160 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:24.160 Processing file lib/vhost/vhost_scsi.c 00:07:24.160 Processing file lib/vhost/vhost_blk.c 00:07:24.160 Processing file lib/vhost/vhost_internal.h 00:07:24.160 Processing file lib/vhost/vhost_rpc.c 00:07:24.160 Processing file lib/vhost/rte_vhost_user.c 00:07:24.160 Processing file lib/vhost/vhost.c 00:07:24.420 Processing file lib/virtio/virtio_pci.c 00:07:24.420 Processing file lib/virtio/virtio.c 00:07:24.420 Processing file lib/virtio/virtio_vhost_user.c 00:07:24.420 Processing file lib/virtio/virtio_vfio_user.c 00:07:24.420 Processing file lib/vmd/vmd.c 00:07:24.420 Processing file lib/vmd/led.c 00:07:24.681 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:24.681 Processing file module/accel/dsa/accel_dsa.c 00:07:24.681 Processing file module/accel/error/accel_error_rpc.c 00:07:24.681 Processing file module/accel/error/accel_error.c 00:07:24.681 Processing file module/accel/iaa/accel_iaa.c 00:07:24.681 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:24.681 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:24.681 Processing file module/accel/ioat/accel_ioat.c 00:07:25.018 Processing file module/bdev/aio/bdev_aio.c 00:07:25.018 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:25.018 Processing file module/bdev/delay/vbdev_delay.c 00:07:25.018 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:25.018 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:25.018 Processing file module/bdev/error/vbdev_error.c 00:07:25.018 Processing file module/bdev/ftl/bdev_ftl.c 00:07:25.018 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:25.277 Processing file module/bdev/gpt/gpt.h 00:07:25.277 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:25.277 Processing file module/bdev/gpt/gpt.c 00:07:25.277 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:25.277 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:25.277 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:25.277 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:25.537 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:25.537 Processing file module/bdev/malloc/bdev_malloc.c 00:07:25.537 Processing file module/bdev/null/bdev_null.c 00:07:25.537 Processing file module/bdev/null/bdev_null_rpc.c 00:07:25.797 Processing file module/bdev/nvme/nvme_rpc.c 00:07:25.797 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:25.797 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:07:25.797 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:25.797 Processing file module/bdev/nvme/vbdev_opal.c 00:07:25.797 Processing file module/bdev/nvme/bdev_nvme.c 00:07:25.797 Processing file 
module/bdev/nvme/vbdev_opal_rpc.c 00:07:26.056 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:26.056 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:26.316 Processing file module/bdev/raid/concat.c 00:07:26.316 Processing file module/bdev/raid/raid1.c 00:07:26.316 Processing file module/bdev/raid/raid5f.c 00:07:26.316 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:26.316 Processing file module/bdev/raid/raid0.c 00:07:26.316 Processing file module/bdev/raid/bdev_raid.h 00:07:26.316 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:26.316 Processing file module/bdev/raid/bdev_raid.c 00:07:26.316 Processing file module/bdev/split/vbdev_split.c 00:07:26.316 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:26.316 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:26.316 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:26.316 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:26.575 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:26.575 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:26.575 Processing file module/blob/bdev/blob_bdev.c 00:07:26.575 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:26.575 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:26.575 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:26.575 Processing file module/event/subsystems/accel/accel.c 00:07:26.834 Processing file module/event/subsystems/bdev/bdev.c 00:07:26.834 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:26.834 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:26.834 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:26.834 Processing file module/event/subsystems/nbd/nbd.c 00:07:26.834 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:26.834 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:27.092 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:27.092 Processing file module/event/subsystems/scsi/scsi.c 00:07:27.092 Processing file module/event/subsystems/sock/sock.c 00:07:27.092 Processing file module/event/subsystems/ublk/ublk.c 00:07:27.092 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:27.352 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:27.352 Processing file module/event/subsystems/vmd/vmd.c 00:07:27.352 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:27.352 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:27.352 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:27.352 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:27.611 Processing file module/sock/sock_kernel.h 00:07:27.611 Processing file module/sock/posix/posix.c 00:07:27.611 Writing directory view page. 00:07:27.611 Overall coverage rate: 00:07:27.611 lines......: 38.6% (39266 of 101740 lines) 00:07:27.611 functions..: 42.2% (3587 of 8494 functions) 00:07:27.611 04:47:50 -- unit/unittest.sh@277 -- # set +x 00:07:27.611 00:07:27.611 00:07:27.611 ===================== 00:07:27.611 All unit tests passed 00:07:27.611 ===================== 00:07:27.611 WARN: lcov not installed or SPDK built without coverage! 
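The coverage summary above is consistent with its raw counts (39266/101740 ≈ 38.6 %, 3587/8494 ≈ 42.2 %), and the long "Processing file" listing is the coverage tooling walking each compiled source before the directory view page is written. The unit binaries that produced it all use the CUnit 2.1-3 harness whose banners and Run Summary blocks appear throughout the rest of this log. A minimal sketch of that harness, with a hypothetical suite and test name rather than any real SPDK suite, looks roughly like this:

    #include <CUnit/Basic.h>

    /* Hypothetical test body; the real suites live under test/unit/ in the repo. */
    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        CU_add_test(suite, "test_example", test_example);

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the "Run Summary" blocks seen in this log */
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return num_failures;
    }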
00:07:27.611 00:07:27.611 00:07:27.611 00:07:27.611 real 3m3.250s 00:07:27.611 user 2m38.724s 00:07:27.611 sys 0m14.934s 00:07:27.611 04:47:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.611 04:47:50 -- common/autotest_common.sh@10 -- # set +x 00:07:27.611 ************************************ 00:07:27.611 END TEST unittest 00:07:27.611 ************************************ 00:07:27.611 04:47:51 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:07:27.611 04:47:51 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:07:27.611 04:47:51 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:07:27.611 04:47:51 -- spdk/autotest.sh@160 -- # timing_enter lib 00:07:27.611 04:47:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.611 04:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.611 04:47:51 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.611 04:47:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.611 04:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.611 04:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.611 ************************************ 00:07:27.611 START TEST env 00:07:27.611 ************************************ 00:07:27.611 04:47:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.871 * Looking for test storage... 00:07:27.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:27.871 04:47:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:27.871 04:47:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:27.871 04:47:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:27.871 04:47:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:27.871 04:47:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:27.871 04:47:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:27.871 04:47:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:27.871 04:47:51 -- scripts/common.sh@335 -- # IFS=.-: 00:07:27.871 04:47:51 -- scripts/common.sh@335 -- # read -ra ver1 00:07:27.871 04:47:51 -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.871 04:47:51 -- scripts/common.sh@336 -- # read -ra ver2 00:07:27.871 04:47:51 -- scripts/common.sh@337 -- # local 'op=<' 00:07:27.871 04:47:51 -- scripts/common.sh@339 -- # ver1_l=2 00:07:27.871 04:47:51 -- scripts/common.sh@340 -- # ver2_l=1 00:07:27.871 04:47:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:27.871 04:47:51 -- scripts/common.sh@343 -- # case "$op" in 00:07:27.871 04:47:51 -- scripts/common.sh@344 -- # : 1 00:07:27.871 04:47:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:27.871 04:47:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.871 04:47:51 -- scripts/common.sh@364 -- # decimal 1 00:07:27.871 04:47:51 -- scripts/common.sh@352 -- # local d=1 00:07:27.871 04:47:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.871 04:47:51 -- scripts/common.sh@354 -- # echo 1 00:07:27.871 04:47:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:27.871 04:47:51 -- scripts/common.sh@365 -- # decimal 2 00:07:27.871 04:47:51 -- scripts/common.sh@352 -- # local d=2 00:07:27.871 04:47:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.871 04:47:51 -- scripts/common.sh@354 -- # echo 2 00:07:27.871 04:47:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:27.871 04:47:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:27.871 04:47:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:27.871 04:47:51 -- scripts/common.sh@367 -- # return 0 00:07:27.871 04:47:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.871 04:47:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.871 --rc genhtml_branch_coverage=1 00:07:27.871 --rc genhtml_function_coverage=1 00:07:27.871 --rc genhtml_legend=1 00:07:27.871 --rc geninfo_all_blocks=1 00:07:27.871 --rc geninfo_unexecuted_blocks=1 00:07:27.871 00:07:27.871 ' 00:07:27.871 04:47:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.871 --rc genhtml_branch_coverage=1 00:07:27.871 --rc genhtml_function_coverage=1 00:07:27.871 --rc genhtml_legend=1 00:07:27.871 --rc geninfo_all_blocks=1 00:07:27.871 --rc geninfo_unexecuted_blocks=1 00:07:27.871 00:07:27.871 ' 00:07:27.871 04:47:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.871 --rc genhtml_branch_coverage=1 00:07:27.871 --rc genhtml_function_coverage=1 00:07:27.871 --rc genhtml_legend=1 00:07:27.871 --rc geninfo_all_blocks=1 00:07:27.871 --rc geninfo_unexecuted_blocks=1 00:07:27.871 00:07:27.871 ' 00:07:27.871 04:47:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.871 --rc genhtml_branch_coverage=1 00:07:27.871 --rc genhtml_function_coverage=1 00:07:27.871 --rc genhtml_legend=1 00:07:27.871 --rc geninfo_all_blocks=1 00:07:27.871 --rc geninfo_unexecuted_blocks=1 00:07:27.871 00:07:27.871 ' 00:07:27.871 04:47:51 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.871 04:47:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.871 04:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.871 04:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.871 ************************************ 00:07:27.871 START TEST env_memory 00:07:27.871 ************************************ 00:07:27.871 04:47:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.871 00:07:27.871 00:07:27.871 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.871 http://cunit.sourceforge.net/ 00:07:27.871 00:07:27.871 00:07:27.871 Suite: memory 00:07:27.871 Test: alloc and free memory map ...[2024-11-18 04:47:51.322473] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:27.871 passed 00:07:27.871 Test: mem 
map translation ...[2024-11-18 04:47:51.386888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:27.871 [2024-11-18 04:47:51.386959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:27.871 [2024-11-18 04:47:51.387074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:27.871 [2024-11-18 04:47:51.387114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:28.131 passed 00:07:28.131 Test: mem map registration ...[2024-11-18 04:47:51.487681] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:28.131 [2024-11-18 04:47:51.487785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:28.131 passed 00:07:28.131 Test: mem map adjacent registrations ...passed 00:07:28.131 00:07:28.131 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.131 suites 1 1 n/a 0 0 00:07:28.131 tests 4 4 4 0 0 00:07:28.131 asserts 152 152 152 0 n/a 00:07:28.131 00:07:28.131 Elapsed time = 0.357 seconds 00:07:28.131 00:07:28.131 real 0m0.385s 00:07:28.131 user 0m0.361s 00:07:28.131 sys 0m0.025s 00:07:28.131 04:47:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.131 04:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.131 ************************************ 00:07:28.131 END TEST env_memory 00:07:28.131 ************************************ 00:07:28.390 04:47:51 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:28.390 04:47:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.390 04:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.390 04:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.390 ************************************ 00:07:28.390 START TEST env_vtophys 00:07:28.390 ************************************ 00:07:28.390 04:47:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:28.390 EAL: lib.eal log level changed from notice to debug 00:07:28.390 EAL: Detected lcore 0 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 1 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 2 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 3 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 4 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 5 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 6 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 7 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 8 as core 0 on socket 0 00:07:28.390 EAL: Detected lcore 9 as core 0 on socket 0 00:07:28.390 EAL: Maximum logical cores by configuration: 128 00:07:28.390 EAL: Detected CPU lcores: 10 00:07:28.390 EAL: Detected NUMA nodes: 1 00:07:28.390 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:28.390 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:28.390 EAL: Checking presence of .so 'librte_eal.so' 00:07:28.390 EAL: Detected static linkage of DPDK 00:07:28.390 EAL: No shared files mode enabled, IPC will be 
disabled 00:07:28.390 EAL: Selected IOVA mode 'PA' 00:07:28.390 EAL: Probing VFIO support... 00:07:28.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:28.391 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:28.391 EAL: Ask a virtual area of 0x2e000 bytes 00:07:28.391 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:28.391 EAL: Setting up physically contiguous memory... 00:07:28.391 EAL: Setting maximum number of open files to 1048576 00:07:28.391 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:28.391 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:28.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.391 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:28.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.391 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:28.391 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:28.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.391 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:28.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.391 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:28.391 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:28.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.391 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:28.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.391 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:28.391 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:28.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.391 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:28.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.391 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:28.391 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:28.391 EAL: Hugepages will be freed exactly as allocated. 00:07:28.391 EAL: No shared files mode enabled, IPC is disabled 00:07:28.391 EAL: No shared files mode enabled, IPC is disabled 00:07:28.391 EAL: TSC frequency is ~2200000 KHz 00:07:28.391 EAL: Main lcore 0 is ready (tid=7da67f0d3a80;cpuset=[0]) 00:07:28.391 EAL: Trying to obtain current memory policy. 00:07:28.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.391 EAL: Restoring previous memory policy: 0 00:07:28.391 EAL: request: mp_malloc_sync 00:07:28.391 EAL: No shared files mode enabled, IPC is disabled 00:07:28.391 EAL: Heap on socket 0 was expanded by 2MB 00:07:28.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:28.391 EAL: Mem event callback 'spdk:(nil)' registered 00:07:28.391 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:28.649 00:07:28.649 00:07:28.649 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.649 http://cunit.sourceforge.net/ 00:07:28.649 00:07:28.649 00:07:28.649 Suite: components_suite 00:07:28.649 Test: vtophys_malloc_test ...passed 00:07:28.649 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
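The EAL bring-up traced above (lcore detection, memseg list reservations, hugepage bookkeeping, and the 'spdk:(nil)' mem event callback registration) all happens inside a single environment-initialization call before any test logic runs; the policy/expand/shrink exchange of the malloc suite resumes immediately below. A minimal bring-up sketch, assuming the spdk/env.h API of this revision and an illustrative app name, is roughly:

    #include "spdk/env.h"
    #include <stdio.h>

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_sketch";   /* illustrative; not a name from this log */

        /* Emits the EAL lcore/memseg/hugepage messages captured above. */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        printf("env ready on core %u\n", spdk_env_get_current_core());
        return 0;
    }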
00:07:28.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.649 EAL: Restoring previous memory policy: 4 00:07:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.649 EAL: request: mp_malloc_sync 00:07:28.649 EAL: No shared files mode enabled, IPC is disabled 00:07:28.649 EAL: Heap on socket 0 was expanded by 4MB 00:07:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.649 EAL: request: mp_malloc_sync 00:07:28.649 EAL: No shared files mode enabled, IPC is disabled 00:07:28.649 EAL: Heap on socket 0 was shrunk by 4MB 00:07:28.649 EAL: Trying to obtain current memory policy. 00:07:28.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.649 EAL: Restoring previous memory policy: 4 00:07:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.649 EAL: request: mp_malloc_sync 00:07:28.649 EAL: No shared files mode enabled, IPC is disabled 00:07:28.649 EAL: Heap on socket 0 was expanded by 6MB 00:07:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.649 EAL: request: mp_malloc_sync 00:07:28.649 EAL: No shared files mode enabled, IPC is disabled 00:07:28.649 EAL: Heap on socket 0 was shrunk by 6MB 00:07:28.649 EAL: Trying to obtain current memory policy. 00:07:28.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.649 EAL: Restoring previous memory policy: 4 00:07:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.649 EAL: request: mp_malloc_sync 00:07:28.649 EAL: No shared files mode enabled, IPC is disabled 00:07:28.649 EAL: Heap on socket 0 was expanded by 10MB 00:07:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.649 EAL: request: mp_malloc_sync 00:07:28.649 EAL: No shared files mode enabled, IPC is disabled 00:07:28.649 EAL: Heap on socket 0 was shrunk by 10MB 00:07:28.650 EAL: Trying to obtain current memory policy. 00:07:28.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.650 EAL: Restoring previous memory policy: 4 00:07:28.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.650 EAL: request: mp_malloc_sync 00:07:28.650 EAL: No shared files mode enabled, IPC is disabled 00:07:28.650 EAL: Heap on socket 0 was expanded by 18MB 00:07:28.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.650 EAL: request: mp_malloc_sync 00:07:28.650 EAL: No shared files mode enabled, IPC is disabled 00:07:28.650 EAL: Heap on socket 0 was shrunk by 18MB 00:07:28.650 EAL: Trying to obtain current memory policy. 00:07:28.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.650 EAL: Restoring previous memory policy: 4 00:07:28.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.650 EAL: request: mp_malloc_sync 00:07:28.650 EAL: No shared files mode enabled, IPC is disabled 00:07:28.650 EAL: Heap on socket 0 was expanded by 34MB 00:07:28.650 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.650 EAL: request: mp_malloc_sync 00:07:28.650 EAL: No shared files mode enabled, IPC is disabled 00:07:28.650 EAL: Heap on socket 0 was shrunk by 34MB 00:07:28.908 EAL: Trying to obtain current memory policy. 
00:07:28.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.908 EAL: Restoring previous memory policy: 4 00:07:28.908 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.908 EAL: request: mp_malloc_sync 00:07:28.908 EAL: No shared files mode enabled, IPC is disabled 00:07:28.908 EAL: Heap on socket 0 was expanded by 66MB 00:07:28.908 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.908 EAL: request: mp_malloc_sync 00:07:28.908 EAL: No shared files mode enabled, IPC is disabled 00:07:28.908 EAL: Heap on socket 0 was shrunk by 66MB 00:07:28.908 EAL: Trying to obtain current memory policy. 00:07:28.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.908 EAL: Restoring previous memory policy: 4 00:07:28.908 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.908 EAL: request: mp_malloc_sync 00:07:28.908 EAL: No shared files mode enabled, IPC is disabled 00:07:28.908 EAL: Heap on socket 0 was expanded by 130MB 00:07:29.168 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.168 EAL: request: mp_malloc_sync 00:07:29.168 EAL: No shared files mode enabled, IPC is disabled 00:07:29.168 EAL: Heap on socket 0 was shrunk by 130MB 00:07:29.427 EAL: Trying to obtain current memory policy. 00:07:29.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.427 EAL: Restoring previous memory policy: 4 00:07:29.427 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.427 EAL: request: mp_malloc_sync 00:07:29.427 EAL: No shared files mode enabled, IPC is disabled 00:07:29.427 EAL: Heap on socket 0 was expanded by 258MB 00:07:29.995 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.995 EAL: request: mp_malloc_sync 00:07:29.995 EAL: No shared files mode enabled, IPC is disabled 00:07:29.995 EAL: Heap on socket 0 was shrunk by 258MB 00:07:30.254 EAL: Trying to obtain current memory policy. 00:07:30.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.254 EAL: Restoring previous memory policy: 4 00:07:30.254 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.254 EAL: request: mp_malloc_sync 00:07:30.254 EAL: No shared files mode enabled, IPC is disabled 00:07:30.254 EAL: Heap on socket 0 was expanded by 514MB 00:07:31.191 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.191 EAL: request: mp_malloc_sync 00:07:31.191 EAL: No shared files mode enabled, IPC is disabled 00:07:31.191 EAL: Heap on socket 0 was shrunk by 514MB 00:07:31.759 EAL: Trying to obtain current memory policy. 
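Each round of vtophys_spdk_malloc_test pairs a pinned allocation with a virtual-to-physical translation check, which is why every 'expanded by' above has a matching 'shrunk by'; the sizes climb from 4 MB through 514 MB, and the final round continues directly below. A hedged sketch of the allocate-translate-free cycle, assuming the spdk/env.h DMA helpers behave as in mainline SPDK (including spdk_vtophys() tolerating a NULL length pointer), is:

    #include "spdk/env.h"
    #include <assert.h>

    /* One round of the cycle this suite repeats with growing sizes. */
    static void
    check_vtophys(size_t size)
    {
        void *buf;
        uint64_t paddr;

        /* Pinned, DMA-safe allocation; may print "Heap ... expanded by ..." via EAL. */
        buf = spdk_dma_malloc(size, 0x1000 /* 4 KiB alignment */, NULL);
        assert(buf != NULL);

        /* In IOVA=PA mode the buffer must have a physical mapping. */
        paddr = spdk_vtophys(buf, NULL);
        assert(paddr != SPDK_VTOPHYS_ERROR);

        spdk_dma_free(buf);   /* may print the matching "shrunk by" message */
    }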
00:07:31.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:31.759 EAL: Restoring previous memory policy: 4 00:07:31.759 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.759 EAL: request: mp_malloc_sync 00:07:31.759 EAL: No shared files mode enabled, IPC is disabled 00:07:31.759 EAL: Heap on socket 0 was expanded by 1026MB 00:07:33.662 EAL: Calling mem event callback 'spdk:(nil)' 00:07:33.662 EAL: request: mp_malloc_sync 00:07:33.662 EAL: No shared files mode enabled, IPC is disabled 00:07:33.662 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:35.039 passed 00:07:35.040 00:07:35.040 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.040 suites 1 1 n/a 0 0 00:07:35.040 tests 2 2 2 0 0 00:07:35.040 asserts 5425 5425 5425 0 n/a 00:07:35.040 00:07:35.040 Elapsed time = 6.314 seconds 00:07:35.040 EAL: Calling mem event callback 'spdk:(nil)' 00:07:35.040 EAL: request: mp_malloc_sync 00:07:35.040 EAL: No shared files mode enabled, IPC is disabled 00:07:35.040 EAL: Heap on socket 0 was shrunk by 2MB 00:07:35.040 EAL: No shared files mode enabled, IPC is disabled 00:07:35.040 EAL: No shared files mode enabled, IPC is disabled 00:07:35.040 EAL: No shared files mode enabled, IPC is disabled 00:07:35.040 00:07:35.040 real 0m6.589s 00:07:35.040 user 0m5.718s 00:07:35.040 sys 0m0.746s 00:07:35.040 04:47:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.040 ************************************ 00:07:35.040 END TEST env_vtophys 00:07:35.040 04:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.040 ************************************ 00:07:35.040 04:47:58 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:35.040 04:47:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.040 04:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.040 04:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.040 ************************************ 00:07:35.040 START TEST env_pci 00:07:35.040 ************************************ 00:07:35.040 04:47:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:35.040 00:07:35.040 00:07:35.040 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.040 http://cunit.sourceforge.net/ 00:07:35.040 00:07:35.040 00:07:35.040 Suite: pci 00:07:35.040 Test: pci_hook ...[2024-11-18 04:47:58.378557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60346 has claimed it 00:07:35.040 passedEAL: Cannot find device (10000:00:01.0) 00:07:35.040 EAL: Failed to attach device on primary process 00:07:35.040 00:07:35.040 00:07:35.040 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.040 suites 1 1 n/a 0 0 00:07:35.040 tests 1 1 1 0 0 00:07:35.040 asserts 25 25 25 0 n/a 00:07:35.040 00:07:35.040 Elapsed time = 0.007 seconds 00:07:35.040 00:07:35.040 real 0m0.080s 00:07:35.040 user 0m0.040s 00:07:35.040 sys 0m0.041s 00:07:35.040 04:47:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.040 04:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.040 ************************************ 00:07:35.040 END TEST env_pci 00:07:35.040 ************************************ 00:07:35.040 04:47:58 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:35.040 04:47:58 -- env/env.sh@15 -- # uname 00:07:35.040 04:47:58 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:35.040 04:47:58 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:07:35.040 04:47:58 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:35.040 04:47:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:35.040 04:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.040 04:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.040 ************************************ 00:07:35.040 START TEST env_dpdk_post_init 00:07:35.040 ************************************ 00:07:35.040 04:47:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:35.040 EAL: Detected CPU lcores: 10 00:07:35.040 EAL: Detected NUMA nodes: 1 00:07:35.040 EAL: Detected static linkage of DPDK 00:07:35.040 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:35.299 EAL: Selected IOVA mode 'PA' 00:07:35.299 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:35.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:07:35.299 Starting DPDK initialization... 00:07:35.299 Starting SPDK post initialization... 00:07:35.299 SPDK NVMe probe 00:07:35.299 Attaching to 0000:00:06.0 00:07:35.299 Attached to 0000:00:06.0 00:07:35.299 Cleaning up... 00:07:35.299 00:07:35.299 real 0m0.255s 00:07:35.299 user 0m0.078s 00:07:35.299 sys 0m0.078s 00:07:35.299 04:47:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.299 04:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.299 ************************************ 00:07:35.299 END TEST env_dpdk_post_init 00:07:35.299 ************************************ 00:07:35.299 04:47:58 -- env/env.sh@26 -- # uname 00:07:35.299 04:47:58 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:35.299 04:47:58 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:35.299 04:47:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.299 04:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.299 04:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.299 ************************************ 00:07:35.299 START TEST env_mem_callbacks 00:07:35.299 ************************************ 00:07:35.299 04:47:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:35.558 EAL: Detected CPU lcores: 10 00:07:35.558 EAL: Detected NUMA nodes: 1 00:07:35.558 EAL: Detected static linkage of DPDK 00:07:35.558 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:35.558 EAL: Selected IOVA mode 'PA' 00:07:35.558 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:35.558 00:07:35.558 00:07:35.558 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.558 http://cunit.sourceforge.net/ 00:07:35.558 00:07:35.558 00:07:35.558 Suite: memory 00:07:35.558 Test: test ... 
00:07:35.558 register 0x200000200000 2097152 00:07:35.558 malloc 3145728 00:07:35.558 register 0x200000400000 4194304 00:07:35.558 buf 0x2000004fffc0 len 3145728 PASSED 00:07:35.558 malloc 64 00:07:35.558 buf 0x2000004ffec0 len 64 PASSED 00:07:35.558 malloc 4194304 00:07:35.558 register 0x200000800000 6291456 00:07:35.558 buf 0x2000009fffc0 len 4194304 PASSED 00:07:35.558 free 0x2000004fffc0 3145728 00:07:35.558 free 0x2000004ffec0 64 00:07:35.558 unregister 0x200000400000 4194304 PASSED 00:07:35.558 free 0x2000009fffc0 4194304 00:07:35.558 unregister 0x200000800000 6291456 PASSED 00:07:35.558 malloc 8388608 00:07:35.558 register 0x200000400000 10485760 00:07:35.558 buf 0x2000005fffc0 len 8388608 PASSED 00:07:35.558 free 0x2000005fffc0 8388608 00:07:35.558 unregister 0x200000400000 10485760 PASSED 00:07:35.558 passed 00:07:35.558 00:07:35.558 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.558 suites 1 1 n/a 0 0 00:07:35.558 tests 1 1 1 0 0 00:07:35.558 asserts 15 15 15 0 n/a 00:07:35.558 00:07:35.558 Elapsed time = 0.058 seconds 00:07:35.558 00:07:35.558 real 0m0.263s 00:07:35.558 user 0m0.089s 00:07:35.558 sys 0m0.074s 00:07:35.558 04:47:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.558 04:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.558 ************************************ 00:07:35.558 END TEST env_mem_callbacks 00:07:35.558 ************************************ 00:07:35.817 00:07:35.817 real 0m8.030s 00:07:35.817 user 0m6.478s 00:07:35.817 sys 0m1.224s 00:07:35.817 04:47:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.817 04:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.817 ************************************ 00:07:35.817 END TEST env 00:07:35.817 ************************************ 00:07:35.817 04:47:59 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:35.817 04:47:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.817 04:47:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.817 04:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.817 ************************************ 00:07:35.817 START TEST rpc 00:07:35.817 ************************************ 00:07:35.817 04:47:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:35.817 * Looking for test storage... 
00:07:35.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.817 04:47:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:35.817 04:47:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:35.817 04:47:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:35.817 04:47:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:35.817 04:47:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:35.817 04:47:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:35.817 04:47:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:35.817 04:47:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:35.817 04:47:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:35.817 04:47:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.817 04:47:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:35.817 04:47:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:35.817 04:47:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:35.817 04:47:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:35.817 04:47:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:35.817 04:47:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:35.817 04:47:59 -- scripts/common.sh@344 -- # : 1 00:07:35.817 04:47:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:35.817 04:47:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.817 04:47:59 -- scripts/common.sh@364 -- # decimal 1 00:07:35.817 04:47:59 -- scripts/common.sh@352 -- # local d=1 00:07:35.817 04:47:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.817 04:47:59 -- scripts/common.sh@354 -- # echo 1 00:07:35.817 04:47:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:35.817 04:47:59 -- scripts/common.sh@365 -- # decimal 2 00:07:35.817 04:47:59 -- scripts/common.sh@352 -- # local d=2 00:07:35.817 04:47:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.817 04:47:59 -- scripts/common.sh@354 -- # echo 2 00:07:35.817 04:47:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:35.817 04:47:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:35.817 04:47:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:35.817 04:47:59 -- scripts/common.sh@367 -- # return 0 00:07:35.817 04:47:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.817 04:47:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:35.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.817 --rc genhtml_branch_coverage=1 00:07:35.817 --rc genhtml_function_coverage=1 00:07:35.817 --rc genhtml_legend=1 00:07:35.817 --rc geninfo_all_blocks=1 00:07:35.817 --rc geninfo_unexecuted_blocks=1 00:07:35.817 00:07:35.817 ' 00:07:35.817 04:47:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:35.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.817 --rc genhtml_branch_coverage=1 00:07:35.817 --rc genhtml_function_coverage=1 00:07:35.817 --rc genhtml_legend=1 00:07:35.817 --rc geninfo_all_blocks=1 00:07:35.817 --rc geninfo_unexecuted_blocks=1 00:07:35.817 00:07:35.817 ' 00:07:35.817 04:47:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:35.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.817 --rc genhtml_branch_coverage=1 00:07:35.817 --rc genhtml_function_coverage=1 00:07:35.817 --rc genhtml_legend=1 00:07:35.817 --rc geninfo_all_blocks=1 00:07:35.817 --rc geninfo_unexecuted_blocks=1 00:07:35.817 00:07:35.817 ' 00:07:35.817 04:47:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:35.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.817 --rc genhtml_branch_coverage=1 00:07:35.817 --rc genhtml_function_coverage=1 00:07:35.817 --rc genhtml_legend=1 00:07:35.817 --rc geninfo_all_blocks=1 00:07:35.817 --rc geninfo_unexecuted_blocks=1 00:07:35.817 00:07:35.817 ' 00:07:35.817 04:47:59 -- rpc/rpc.sh@65 -- # spdk_pid=60472 00:07:35.817 04:47:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.817 04:47:59 -- rpc/rpc.sh@67 -- # waitforlisten 60472 00:07:35.818 04:47:59 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:35.818 04:47:59 -- common/autotest_common.sh@829 -- # '[' -z 60472 ']' 00:07:35.818 04:47:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.818 04:47:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.818 04:47:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.818 04:47:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.818 04:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.076 [2024-11-18 04:47:59.394624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.076 [2024-11-18 04:47:59.394792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60472 ] 00:07:36.076 [2024-11-18 04:47:59.566216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.335 [2024-11-18 04:47:59.748830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.335 [2024-11-18 04:47:59.749094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:36.335 [2024-11-18 04:47:59.749118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60472' to capture a snapshot of events at runtime. 00:07:36.335 [2024-11-18 04:47:59.749133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60472 for offline analysis/debug. 
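The spdk_tgt launched here is what every rpc_* test below talks to: once its reactor starts (immediately below), the harness's rpc_cmd helper drives methods such as bdev_malloc_create, bdev_get_bdevs, and bdev_passthru_create over the /var/tmp/spdk.sock JSON-RPC socket. Methods like those come from the many *_rpc.c files in the coverage listing earlier. A hedged sketch of how a module registers one, using a hypothetical method name and assuming the spdk/rpc.h macros behave as in mainline SPDK, is:

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"

    /* Hypothetical illustration only; "example_ping" is not a real SPDK method. */
    static void
    rpc_example_ping(struct spdk_jsonrpc_request *request,
                     const struct spdk_json_val *params)
    {
        if (params != NULL) {
            spdk_jsonrpc_send_error_response(request,
                                             SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "example_ping takes no parameters");
            return;
        }
        /* Reply with a boolean JSON-RPC result. */
        spdk_jsonrpc_send_bool_response(request, true);
    }
    /* Available once the target reaches runtime state, like the bdev methods. */
    SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)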
00:07:36.335 [2024-11-18 04:47:59.749213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.713 04:48:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.713 04:48:01 -- common/autotest_common.sh@862 -- # return 0 00:07:37.713 04:48:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:37.713 04:48:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:37.713 04:48:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:37.713 04:48:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:37.713 04:48:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.713 04:48:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.713 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.713 ************************************ 00:07:37.713 START TEST rpc_integrity 00:07:37.713 ************************************ 00:07:37.713 04:48:01 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:07:37.713 04:48:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:37.713 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.713 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.713 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.713 04:48:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:37.713 04:48:01 -- rpc/rpc.sh@13 -- # jq length 00:07:37.713 04:48:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:37.713 04:48:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:37.713 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.713 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.713 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.713 04:48:01 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:37.713 04:48:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:37.713 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.713 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.713 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.713 04:48:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:37.713 { 00:07:37.713 "name": "Malloc0", 00:07:37.713 "aliases": [ 00:07:37.713 "45f2f2ff-3e09-4ae2-a696-3e69f57b7096" 00:07:37.713 ], 00:07:37.713 "product_name": "Malloc disk", 00:07:37.713 "block_size": 512, 00:07:37.713 "num_blocks": 16384, 00:07:37.713 "uuid": "45f2f2ff-3e09-4ae2-a696-3e69f57b7096", 00:07:37.713 "assigned_rate_limits": { 00:07:37.713 "rw_ios_per_sec": 0, 00:07:37.713 "rw_mbytes_per_sec": 0, 00:07:37.713 "r_mbytes_per_sec": 0, 00:07:37.713 "w_mbytes_per_sec": 0 00:07:37.713 }, 00:07:37.713 "claimed": false, 00:07:37.713 "zoned": false, 00:07:37.713 "supported_io_types": { 00:07:37.713 "read": true, 00:07:37.713 "write": true, 00:07:37.713 "unmap": true, 00:07:37.713 "write_zeroes": true, 00:07:37.713 "flush": true, 00:07:37.713 "reset": true, 00:07:37.713 "compare": false, 00:07:37.713 "compare_and_write": false, 00:07:37.713 "abort": true, 00:07:37.713 "nvme_admin": false, 00:07:37.713 "nvme_io": false 00:07:37.713 }, 00:07:37.713 "memory_domains": [ 00:07:37.713 { 00:07:37.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.713 
"dma_device_type": 2 00:07:37.713 } 00:07:37.713 ], 00:07:37.713 "driver_specific": {} 00:07:37.713 } 00:07:37.713 ]' 00:07:37.713 04:48:01 -- rpc/rpc.sh@17 -- # jq length 00:07:37.713 04:48:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:37.713 04:48:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:37.713 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.713 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.713 [2024-11-18 04:48:01.152059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:37.713 [2024-11-18 04:48:01.152159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.713 [2024-11-18 04:48:01.152195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:07:37.713 [2024-11-18 04:48:01.152254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.713 [2024-11-18 04:48:01.155240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.713 [2024-11-18 04:48:01.155282] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:37.713 Passthru0 00:07:37.713 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.713 04:48:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:37.713 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.713 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.713 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.713 04:48:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:37.713 { 00:07:37.713 "name": "Malloc0", 00:07:37.713 "aliases": [ 00:07:37.713 "45f2f2ff-3e09-4ae2-a696-3e69f57b7096" 00:07:37.713 ], 00:07:37.713 "product_name": "Malloc disk", 00:07:37.713 "block_size": 512, 00:07:37.713 "num_blocks": 16384, 00:07:37.713 "uuid": "45f2f2ff-3e09-4ae2-a696-3e69f57b7096", 00:07:37.713 "assigned_rate_limits": { 00:07:37.713 "rw_ios_per_sec": 0, 00:07:37.713 "rw_mbytes_per_sec": 0, 00:07:37.713 "r_mbytes_per_sec": 0, 00:07:37.713 "w_mbytes_per_sec": 0 00:07:37.713 }, 00:07:37.713 "claimed": true, 00:07:37.713 "claim_type": "exclusive_write", 00:07:37.713 "zoned": false, 00:07:37.713 "supported_io_types": { 00:07:37.713 "read": true, 00:07:37.713 "write": true, 00:07:37.713 "unmap": true, 00:07:37.713 "write_zeroes": true, 00:07:37.713 "flush": true, 00:07:37.713 "reset": true, 00:07:37.713 "compare": false, 00:07:37.713 "compare_and_write": false, 00:07:37.713 "abort": true, 00:07:37.713 "nvme_admin": false, 00:07:37.713 "nvme_io": false 00:07:37.713 }, 00:07:37.713 "memory_domains": [ 00:07:37.713 { 00:07:37.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.713 "dma_device_type": 2 00:07:37.713 } 00:07:37.713 ], 00:07:37.713 "driver_specific": {} 00:07:37.713 }, 00:07:37.713 { 00:07:37.713 "name": "Passthru0", 00:07:37.713 "aliases": [ 00:07:37.713 "2fcd8c73-69b7-5605-9fe4-58c878d2ece3" 00:07:37.713 ], 00:07:37.713 "product_name": "passthru", 00:07:37.713 "block_size": 512, 00:07:37.713 "num_blocks": 16384, 00:07:37.713 "uuid": "2fcd8c73-69b7-5605-9fe4-58c878d2ece3", 00:07:37.713 "assigned_rate_limits": { 00:07:37.713 "rw_ios_per_sec": 0, 00:07:37.713 "rw_mbytes_per_sec": 0, 00:07:37.713 "r_mbytes_per_sec": 0, 00:07:37.713 "w_mbytes_per_sec": 0 00:07:37.713 }, 00:07:37.713 "claimed": false, 00:07:37.713 "zoned": false, 00:07:37.713 "supported_io_types": { 00:07:37.713 "read": true, 00:07:37.713 "write": true, 00:07:37.713 "unmap": true, 00:07:37.713 
"write_zeroes": true, 00:07:37.713 "flush": true, 00:07:37.713 "reset": true, 00:07:37.713 "compare": false, 00:07:37.713 "compare_and_write": false, 00:07:37.713 "abort": true, 00:07:37.713 "nvme_admin": false, 00:07:37.713 "nvme_io": false 00:07:37.713 }, 00:07:37.714 "memory_domains": [ 00:07:37.714 { 00:07:37.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.714 "dma_device_type": 2 00:07:37.714 } 00:07:37.714 ], 00:07:37.714 "driver_specific": { 00:07:37.714 "passthru": { 00:07:37.714 "name": "Passthru0", 00:07:37.714 "base_bdev_name": "Malloc0" 00:07:37.714 } 00:07:37.714 } 00:07:37.714 } 00:07:37.714 ]' 00:07:37.714 04:48:01 -- rpc/rpc.sh@21 -- # jq length 00:07:37.714 04:48:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:37.714 04:48:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:37.714 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 04:48:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:37.714 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 04:48:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:37.714 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.973 04:48:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:37.973 04:48:01 -- rpc/rpc.sh@26 -- # jq length 00:07:37.973 04:48:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:37.973 00:07:37.973 real 0m0.170s 00:07:37.973 user 0m0.043s 00:07:37.973 sys 0m0.037s 00:07:37.973 04:48:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 ************************************ 00:07:37.973 END TEST rpc_integrity 00:07:37.973 ************************************ 00:07:37.973 04:48:01 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:37.973 04:48:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.973 04:48:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 ************************************ 00:07:37.973 START TEST rpc_plugins 00:07:37.973 ************************************ 00:07:37.973 04:48:01 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:07:37.973 04:48:01 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:37.973 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.973 04:48:01 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:37.973 04:48:01 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:37.973 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.973 04:48:01 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:37.973 { 00:07:37.973 "name": "Malloc1", 00:07:37.973 "aliases": [ 00:07:37.973 "e0bb8cfa-dd34-4357-b067-15c2e585a71b" 00:07:37.973 ], 00:07:37.973 "product_name": "Malloc disk", 00:07:37.973 
"block_size": 4096, 00:07:37.973 "num_blocks": 256, 00:07:37.973 "uuid": "e0bb8cfa-dd34-4357-b067-15c2e585a71b", 00:07:37.973 "assigned_rate_limits": { 00:07:37.973 "rw_ios_per_sec": 0, 00:07:37.973 "rw_mbytes_per_sec": 0, 00:07:37.973 "r_mbytes_per_sec": 0, 00:07:37.973 "w_mbytes_per_sec": 0 00:07:37.973 }, 00:07:37.973 "claimed": false, 00:07:37.973 "zoned": false, 00:07:37.973 "supported_io_types": { 00:07:37.973 "read": true, 00:07:37.973 "write": true, 00:07:37.973 "unmap": true, 00:07:37.973 "write_zeroes": true, 00:07:37.973 "flush": true, 00:07:37.973 "reset": true, 00:07:37.973 "compare": false, 00:07:37.973 "compare_and_write": false, 00:07:37.973 "abort": true, 00:07:37.973 "nvme_admin": false, 00:07:37.973 "nvme_io": false 00:07:37.973 }, 00:07:37.973 "memory_domains": [ 00:07:37.973 { 00:07:37.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.973 "dma_device_type": 2 00:07:37.973 } 00:07:37.973 ], 00:07:37.973 "driver_specific": {} 00:07:37.973 } 00:07:37.973 ]' 00:07:37.973 04:48:01 -- rpc/rpc.sh@32 -- # jq length 00:07:37.973 04:48:01 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:37.973 04:48:01 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:37.973 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.973 04:48:01 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:37.973 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.973 04:48:01 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:37.973 04:48:01 -- rpc/rpc.sh@36 -- # jq length 00:07:37.973 04:48:01 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:37.973 00:07:37.973 real 0m0.075s 00:07:37.973 user 0m0.015s 00:07:37.973 sys 0m0.026s 00:07:37.973 04:48:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 ************************************ 00:07:37.973 END TEST rpc_plugins 00:07:37.973 ************************************ 00:07:37.973 04:48:01 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:37.973 04:48:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.973 04:48:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 ************************************ 00:07:37.973 START TEST rpc_trace_cmd_test 00:07:37.973 ************************************ 00:07:37.973 04:48:01 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:07:37.973 04:48:01 -- rpc/rpc.sh@40 -- # local info 00:07:37.973 04:48:01 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:37.973 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.973 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:37.973 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.973 04:48:01 -- rpc/rpc.sh@42 -- # info='{ 00:07:37.973 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60472", 00:07:37.973 "tpoint_group_mask": "0x8", 00:07:37.973 "iscsi_conn": { 00:07:37.973 "mask": "0x2", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "scsi": { 00:07:37.973 "mask": "0x4", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "bdev": { 00:07:37.973 "mask": "0x8", 00:07:37.973 "tpoint_mask": 
"0xffffffffffffffff" 00:07:37.973 }, 00:07:37.973 "nvmf_rdma": { 00:07:37.973 "mask": "0x10", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "nvmf_tcp": { 00:07:37.973 "mask": "0x20", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "ftl": { 00:07:37.973 "mask": "0x40", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "blobfs": { 00:07:37.973 "mask": "0x80", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "dsa": { 00:07:37.973 "mask": "0x200", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "thread": { 00:07:37.973 "mask": "0x400", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "nvme_pcie": { 00:07:37.973 "mask": "0x800", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "iaa": { 00:07:37.973 "mask": "0x1000", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "nvme_tcp": { 00:07:37.973 "mask": "0x2000", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 }, 00:07:37.973 "bdev_nvme": { 00:07:37.973 "mask": "0x4000", 00:07:37.973 "tpoint_mask": "0x0" 00:07:37.973 } 00:07:37.973 }' 00:07:37.973 04:48:01 -- rpc/rpc.sh@43 -- # jq length 00:07:37.973 04:48:01 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:37.973 04:48:01 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:37.973 04:48:01 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:37.973 04:48:01 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:37.973 04:48:01 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:37.973 04:48:01 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:37.973 04:48:01 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:37.973 04:48:01 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:38.261 04:48:01 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:38.261 00:07:38.261 real 0m0.068s 00:07:38.261 user 0m0.027s 00:07:38.261 sys 0m0.035s 00:07:38.261 ************************************ 00:07:38.261 END TEST rpc_trace_cmd_test 00:07:38.261 ************************************ 00:07:38.261 04:48:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 04:48:01 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:38.261 04:48:01 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:38.261 04:48:01 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:38.261 04:48:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.261 04:48:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 ************************************ 00:07:38.261 START TEST rpc_daemon_integrity 00:07:38.261 ************************************ 00:07:38.261 04:48:01 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:07:38.261 04:48:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:38.261 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.261 04:48:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:38.261 04:48:01 -- rpc/rpc.sh@13 -- # jq length 00:07:38.261 04:48:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:38.261 04:48:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:38.261 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.261 04:48:01 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:38.261 04:48:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:38.261 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.261 04:48:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:38.261 { 00:07:38.261 "name": "Malloc2", 00:07:38.261 "aliases": [ 00:07:38.261 "0d405f89-f07b-41d3-8831-9523e7c63fc4" 00:07:38.261 ], 00:07:38.261 "product_name": "Malloc disk", 00:07:38.261 "block_size": 512, 00:07:38.261 "num_blocks": 16384, 00:07:38.261 "uuid": "0d405f89-f07b-41d3-8831-9523e7c63fc4", 00:07:38.261 "assigned_rate_limits": { 00:07:38.261 "rw_ios_per_sec": 0, 00:07:38.261 "rw_mbytes_per_sec": 0, 00:07:38.261 "r_mbytes_per_sec": 0, 00:07:38.261 "w_mbytes_per_sec": 0 00:07:38.261 }, 00:07:38.261 "claimed": false, 00:07:38.261 "zoned": false, 00:07:38.261 "supported_io_types": { 00:07:38.261 "read": true, 00:07:38.261 "write": true, 00:07:38.261 "unmap": true, 00:07:38.261 "write_zeroes": true, 00:07:38.261 "flush": true, 00:07:38.261 "reset": true, 00:07:38.261 "compare": false, 00:07:38.261 "compare_and_write": false, 00:07:38.261 "abort": true, 00:07:38.261 "nvme_admin": false, 00:07:38.261 "nvme_io": false 00:07:38.261 }, 00:07:38.261 "memory_domains": [ 00:07:38.261 { 00:07:38.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.261 "dma_device_type": 2 00:07:38.261 } 00:07:38.261 ], 00:07:38.261 "driver_specific": {} 00:07:38.261 } 00:07:38.261 ]' 00:07:38.261 04:48:01 -- rpc/rpc.sh@17 -- # jq length 00:07:38.261 04:48:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:38.261 04:48:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:38.261 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 [2024-11-18 04:48:01.619429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:38.261 [2024-11-18 04:48:01.619501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.261 [2024-11-18 04:48:01.619531] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:07:38.261 [2024-11-18 04:48:01.619548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.261 [2024-11-18 04:48:01.622519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.261 [2024-11-18 04:48:01.622582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:38.261 Passthru0 00:07:38.261 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.261 04:48:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:38.261 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.261 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.261 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.261 04:48:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:38.261 { 00:07:38.261 "name": "Malloc2", 00:07:38.261 "aliases": [ 00:07:38.261 "0d405f89-f07b-41d3-8831-9523e7c63fc4" 00:07:38.261 ], 00:07:38.261 "product_name": "Malloc disk", 00:07:38.261 "block_size": 512, 00:07:38.261 "num_blocks": 16384, 00:07:38.261 "uuid": "0d405f89-f07b-41d3-8831-9523e7c63fc4", 00:07:38.261 "assigned_rate_limits": { 00:07:38.261 "rw_ios_per_sec": 0, 00:07:38.261 "rw_mbytes_per_sec": 0, 00:07:38.261 "r_mbytes_per_sec": 0, 00:07:38.261 
"w_mbytes_per_sec": 0 00:07:38.261 }, 00:07:38.261 "claimed": true, 00:07:38.261 "claim_type": "exclusive_write", 00:07:38.262 "zoned": false, 00:07:38.262 "supported_io_types": { 00:07:38.262 "read": true, 00:07:38.262 "write": true, 00:07:38.262 "unmap": true, 00:07:38.262 "write_zeroes": true, 00:07:38.262 "flush": true, 00:07:38.262 "reset": true, 00:07:38.262 "compare": false, 00:07:38.262 "compare_and_write": false, 00:07:38.262 "abort": true, 00:07:38.262 "nvme_admin": false, 00:07:38.262 "nvme_io": false 00:07:38.262 }, 00:07:38.262 "memory_domains": [ 00:07:38.262 { 00:07:38.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.262 "dma_device_type": 2 00:07:38.262 } 00:07:38.262 ], 00:07:38.262 "driver_specific": {} 00:07:38.262 }, 00:07:38.262 { 00:07:38.262 "name": "Passthru0", 00:07:38.262 "aliases": [ 00:07:38.262 "3d6434bf-3ab1-577f-bcbd-29254eb77e64" 00:07:38.262 ], 00:07:38.262 "product_name": "passthru", 00:07:38.262 "block_size": 512, 00:07:38.262 "num_blocks": 16384, 00:07:38.262 "uuid": "3d6434bf-3ab1-577f-bcbd-29254eb77e64", 00:07:38.262 "assigned_rate_limits": { 00:07:38.262 "rw_ios_per_sec": 0, 00:07:38.262 "rw_mbytes_per_sec": 0, 00:07:38.262 "r_mbytes_per_sec": 0, 00:07:38.262 "w_mbytes_per_sec": 0 00:07:38.262 }, 00:07:38.262 "claimed": false, 00:07:38.262 "zoned": false, 00:07:38.262 "supported_io_types": { 00:07:38.262 "read": true, 00:07:38.262 "write": true, 00:07:38.262 "unmap": true, 00:07:38.262 "write_zeroes": true, 00:07:38.262 "flush": true, 00:07:38.262 "reset": true, 00:07:38.262 "compare": false, 00:07:38.262 "compare_and_write": false, 00:07:38.262 "abort": true, 00:07:38.262 "nvme_admin": false, 00:07:38.262 "nvme_io": false 00:07:38.262 }, 00:07:38.262 "memory_domains": [ 00:07:38.262 { 00:07:38.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.262 "dma_device_type": 2 00:07:38.262 } 00:07:38.262 ], 00:07:38.262 "driver_specific": { 00:07:38.262 "passthru": { 00:07:38.262 "name": "Passthru0", 00:07:38.262 "base_bdev_name": "Malloc2" 00:07:38.262 } 00:07:38.262 } 00:07:38.262 } 00:07:38.262 ]' 00:07:38.262 04:48:01 -- rpc/rpc.sh@21 -- # jq length 00:07:38.262 04:48:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:38.262 04:48:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:38.262 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.262 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.262 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.262 04:48:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:38.262 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.262 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.262 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.262 04:48:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:38.262 04:48:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.262 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.262 04:48:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.262 04:48:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:38.262 04:48:01 -- rpc/rpc.sh@26 -- # jq length 00:07:38.262 04:48:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:38.262 00:07:38.262 real 0m0.168s 00:07:38.262 user 0m0.045s 00:07:38.262 sys 0m0.037s 00:07:38.262 04:48:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.262 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.262 ************************************ 00:07:38.262 END TEST 
rpc_daemon_integrity 00:07:38.262 ************************************ 00:07:38.262 04:48:01 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:38.262 04:48:01 -- rpc/rpc.sh@84 -- # killprocess 60472 00:07:38.262 04:48:01 -- common/autotest_common.sh@936 -- # '[' -z 60472 ']' 00:07:38.262 04:48:01 -- common/autotest_common.sh@940 -- # kill -0 60472 00:07:38.262 04:48:01 -- common/autotest_common.sh@941 -- # uname 00:07:38.262 04:48:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:38.262 04:48:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60472 00:07:38.527 04:48:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:38.527 04:48:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:38.527 killing process with pid 60472 00:07:38.527 04:48:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60472' 00:07:38.527 04:48:01 -- common/autotest_common.sh@955 -- # kill 60472 00:07:38.527 04:48:01 -- common/autotest_common.sh@960 -- # wait 60472 00:07:40.434 00:07:40.434 real 0m4.709s 00:07:40.434 user 0m4.995s 00:07:40.434 sys 0m0.849s 00:07:40.434 04:48:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.434 04:48:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.434 ************************************ 00:07:40.434 END TEST rpc 00:07:40.434 ************************************ 00:07:40.434 04:48:03 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:40.434 04:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.434 04:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.434 04:48:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.434 ************************************ 00:07:40.434 START TEST rpc_client 00:07:40.434 ************************************ 00:07:40.434 04:48:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:40.693 * Looking for test storage... 00:07:40.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:40.693 04:48:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.693 04:48:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.693 04:48:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.693 04:48:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.693 04:48:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.693 04:48:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.693 04:48:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.693 04:48:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.693 04:48:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.693 04:48:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.693 04:48:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.693 04:48:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.693 04:48:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.693 04:48:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.693 04:48:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.693 04:48:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.693 04:48:04 -- scripts/common.sh@344 -- # : 1 00:07:40.693 04:48:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.693 04:48:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.693 04:48:04 -- scripts/common.sh@364 -- # decimal 1 00:07:40.693 04:48:04 -- scripts/common.sh@352 -- # local d=1 00:07:40.693 04:48:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.693 04:48:04 -- scripts/common.sh@354 -- # echo 1 00:07:40.693 04:48:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.693 04:48:04 -- scripts/common.sh@365 -- # decimal 2 00:07:40.693 04:48:04 -- scripts/common.sh@352 -- # local d=2 00:07:40.693 04:48:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.693 04:48:04 -- scripts/common.sh@354 -- # echo 2 00:07:40.693 04:48:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.693 04:48:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.693 04:48:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.693 04:48:04 -- scripts/common.sh@367 -- # return 0 00:07:40.693 04:48:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.693 04:48:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.693 --rc genhtml_branch_coverage=1 00:07:40.693 --rc genhtml_function_coverage=1 00:07:40.693 --rc genhtml_legend=1 00:07:40.693 --rc geninfo_all_blocks=1 00:07:40.693 --rc geninfo_unexecuted_blocks=1 00:07:40.693 00:07:40.693 ' 00:07:40.694 04:48:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.694 --rc genhtml_branch_coverage=1 00:07:40.694 --rc genhtml_function_coverage=1 00:07:40.694 --rc genhtml_legend=1 00:07:40.694 --rc geninfo_all_blocks=1 00:07:40.694 --rc geninfo_unexecuted_blocks=1 00:07:40.694 00:07:40.694 ' 00:07:40.694 04:48:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.694 --rc genhtml_branch_coverage=1 00:07:40.694 --rc genhtml_function_coverage=1 00:07:40.694 --rc genhtml_legend=1 00:07:40.694 --rc geninfo_all_blocks=1 00:07:40.694 --rc geninfo_unexecuted_blocks=1 00:07:40.694 00:07:40.694 ' 00:07:40.694 04:48:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.694 --rc genhtml_branch_coverage=1 00:07:40.694 --rc genhtml_function_coverage=1 00:07:40.694 --rc genhtml_legend=1 00:07:40.694 --rc geninfo_all_blocks=1 00:07:40.694 --rc geninfo_unexecuted_blocks=1 00:07:40.694 00:07:40.694 ' 00:07:40.694 04:48:04 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:40.694 OK 00:07:40.694 04:48:04 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:40.694 00:07:40.694 real 0m0.225s 00:07:40.694 user 0m0.126s 00:07:40.694 sys 0m0.115s 00:07:40.694 04:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.694 ************************************ 00:07:40.694 END TEST rpc_client 00:07:40.694 ************************************ 00:07:40.694 04:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.694 04:48:04 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:40.694 04:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.694 04:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.694 04:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.694 ************************************ 00:07:40.694 START TEST 
json_config 00:07:40.694 ************************************ 00:07:40.694 04:48:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:40.953 04:48:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.953 04:48:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.953 04:48:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.953 04:48:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.953 04:48:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.953 04:48:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.953 04:48:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.954 04:48:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.954 04:48:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.954 04:48:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.954 04:48:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.954 04:48:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.954 04:48:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.954 04:48:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.954 04:48:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.954 04:48:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.954 04:48:04 -- scripts/common.sh@344 -- # : 1 00:07:40.954 04:48:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.954 04:48:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.954 04:48:04 -- scripts/common.sh@364 -- # decimal 1 00:07:40.954 04:48:04 -- scripts/common.sh@352 -- # local d=1 00:07:40.954 04:48:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.954 04:48:04 -- scripts/common.sh@354 -- # echo 1 00:07:40.954 04:48:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.954 04:48:04 -- scripts/common.sh@365 -- # decimal 2 00:07:40.954 04:48:04 -- scripts/common.sh@352 -- # local d=2 00:07:40.954 04:48:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.954 04:48:04 -- scripts/common.sh@354 -- # echo 2 00:07:40.954 04:48:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.954 04:48:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.954 04:48:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.954 04:48:04 -- scripts/common.sh@367 -- # return 0 00:07:40.954 04:48:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.954 04:48:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.954 --rc genhtml_branch_coverage=1 00:07:40.954 --rc genhtml_function_coverage=1 00:07:40.954 --rc genhtml_legend=1 00:07:40.954 --rc geninfo_all_blocks=1 00:07:40.954 --rc geninfo_unexecuted_blocks=1 00:07:40.954 00:07:40.954 ' 00:07:40.954 04:48:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.954 --rc genhtml_branch_coverage=1 00:07:40.954 --rc genhtml_function_coverage=1 00:07:40.954 --rc genhtml_legend=1 00:07:40.954 --rc geninfo_all_blocks=1 00:07:40.954 --rc geninfo_unexecuted_blocks=1 00:07:40.954 00:07:40.954 ' 00:07:40.954 04:48:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.954 --rc genhtml_branch_coverage=1 00:07:40.954 --rc genhtml_function_coverage=1 00:07:40.954 --rc genhtml_legend=1 00:07:40.954 --rc 
geninfo_all_blocks=1 00:07:40.954 --rc geninfo_unexecuted_blocks=1 00:07:40.954 00:07:40.954 ' 00:07:40.954 04:48:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.954 --rc genhtml_branch_coverage=1 00:07:40.954 --rc genhtml_function_coverage=1 00:07:40.954 --rc genhtml_legend=1 00:07:40.954 --rc geninfo_all_blocks=1 00:07:40.954 --rc geninfo_unexecuted_blocks=1 00:07:40.954 00:07:40.954 ' 00:07:40.954 04:48:04 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.954 04:48:04 -- nvmf/common.sh@7 -- # uname -s 00:07:40.954 04:48:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.954 04:48:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.954 04:48:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.954 04:48:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.954 04:48:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.954 04:48:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.954 04:48:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.954 04:48:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.954 04:48:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.954 04:48:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.954 04:48:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e74b746-ded7-4dde-a22d-3af59a1bbf22 00:07:40.954 04:48:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=7e74b746-ded7-4dde-a22d-3af59a1bbf22 00:07:40.954 04:48:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.954 04:48:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.954 04:48:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:40.954 04:48:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.954 04:48:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.954 04:48:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.954 04:48:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.954 04:48:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.954 04:48:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.954 04:48:04 -- paths/export.sh@4 -- # 
PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.954 04:48:04 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.954 04:48:04 -- paths/export.sh@6 -- # export PATH 00:07:40.954 04:48:04 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.954 04:48:04 -- nvmf/common.sh@46 -- # : 0 00:07:40.954 04:48:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.954 04:48:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.954 04:48:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.954 04:48:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.954 04:48:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.954 04:48:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.954 04:48:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.954 04:48:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.954 04:48:04 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:40.954 04:48:04 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:07:40.954 04:48:04 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:40.954 04:48:04 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:40.954 04:48:04 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:40.954 04:48:04 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:40.954 04:48:04 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:40.954 04:48:04 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' 
['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:40.954 04:48:04 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:40.954 04:48:04 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:40.954 04:48:04 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:40.954 INFO: JSON configuration test init 00:07:40.954 04:48:04 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:40.954 04:48:04 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:40.954 04:48:04 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:40.954 04:48:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.954 04:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.954 04:48:04 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:40.954 04:48:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.954 04:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.954 04:48:04 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:40.954 04:48:04 -- json_config/json_config.sh@98 -- # local app=target 00:07:40.954 04:48:04 -- json_config/json_config.sh@99 -- # shift 00:07:40.954 04:48:04 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:40.954 04:48:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:40.954 04:48:04 -- json_config/json_config.sh@111 -- # app_pid[$app]=60751 00:07:40.954 Waiting for target to run... 00:07:40.954 04:48:04 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:40.954 04:48:04 -- json_config/json_config.sh@114 -- # waitforlisten 60751 /var/tmp/spdk_tgt.sock 00:07:40.955 04:48:04 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:40.955 04:48:04 -- common/autotest_common.sh@829 -- # '[' -z 60751 ']' 00:07:40.955 04:48:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:40.955 04:48:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:40.955 04:48:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:40.955 04:48:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.955 04:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:40.955 [2024-11-18 04:48:04.430694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
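(The target above is started with --wait-for-rpc, so waitforlisten has to poll the UNIX-domain socket before any RPC is issued. A minimal sketch of that launch-and-wait pattern, using the binary, flags, and socket path visible in the trace; the rpc_get_methods probe and the retry bounds are assumptions, not taken from this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  # Poll until the socket answers an RPC; 100 x 0.5s is an assumed bound.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
          rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
)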
00:07:40.955 [2024-11-18 04:48:04.430865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:07:41.524 [2024-11-18 04:48:04.772635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.524 [2024-11-18 04:48:04.938470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.524 [2024-11-18 04:48:04.938731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.092 04:48:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.092 04:48:05 -- common/autotest_common.sh@862 -- # return 0 00:07:42.092 00:07:42.092 04:48:05 -- json_config/json_config.sh@115 -- # echo '' 00:07:42.092 04:48:05 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:42.092 04:48:05 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:42.092 04:48:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.092 04:48:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.092 04:48:05 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:42.092 04:48:05 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:42.092 04:48:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.092 04:48:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.092 04:48:05 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:42.092 04:48:05 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:42.092 04:48:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:43.030 04:48:06 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:43.030 04:48:06 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:43.030 04:48:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.030 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.030 04:48:06 -- json_config/json_config.sh@48 -- # local ret=0 00:07:43.030 04:48:06 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:43.030 04:48:06 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:43.030 04:48:06 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:43.030 04:48:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:43.030 04:48:06 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:43.030 04:48:06 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:43.030 04:48:06 -- json_config/json_config.sh@51 -- # local get_types 00:07:43.030 04:48:06 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:43.030 04:48:06 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:43.030 04:48:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.030 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.289 04:48:06 -- json_config/json_config.sh@58 -- # return 0 00:07:43.289 04:48:06 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:07:43.289 04:48:06 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:07:43.289 04:48:06 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:07:43.289 04:48:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.289 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.289 04:48:06 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:07:43.289 04:48:06 -- json_config/json_config.sh@160 -- # local expected_notifications 00:07:43.289 04:48:06 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:07:43.289 04:48:06 -- json_config/json_config.sh@164 -- # get_notifications 00:07:43.289 04:48:06 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:43.289 04:48:06 -- json_config/json_config.sh@64 -- # IFS=: 00:07:43.289 04:48:06 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:43.289 04:48:06 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:43.289 04:48:06 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:43.289 04:48:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:43.549 04:48:06 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:43.549 04:48:06 -- json_config/json_config.sh@64 -- # IFS=: 00:07:43.549 04:48:06 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:43.549 04:48:06 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:07:43.549 04:48:06 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:07:43.549 04:48:06 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:43.549 04:48:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:43.549 Nvme0n1p0 Nvme0n1p1 00:07:43.549 04:48:07 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:43.549 04:48:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:43.808 [2024-11-18 04:48:07.302079] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:43.808 [2024-11-18 04:48:07.302234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:43.808 00:07:43.808 04:48:07 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:43.808 04:48:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:44.067 Malloc3 00:07:44.067 04:48:07 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:44.067 04:48:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:44.325 [2024-11-18 04:48:07.735362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.325 [2024-11-18 04:48:07.735461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.325 [2024-11-18 04:48:07.735495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:07:44.325 [2024-11-18 04:48:07.735513] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:44.325 [2024-11-18 04:48:07.738009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.325 [2024-11-18 04:48:07.738056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:44.325 PTBdevFromMalloc3 00:07:44.325 04:48:07 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:44.325 04:48:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:44.584 Null0 00:07:44.584 04:48:07 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:44.584 04:48:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:44.843 Malloc0 00:07:44.843 04:48:08 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:44.843 04:48:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:45.102 Malloc1 00:07:45.102 04:48:08 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:45.102 04:48:08 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:45.361 102400+0 records in 00:07:45.361 102400+0 records out 00:07:45.361 104857600 bytes (105 MB, 100 MiB) copied, 0.233575 s, 449 MB/s 00:07:45.361 04:48:08 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:45.361 04:48:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:45.620 aio_disk 00:07:45.620 04:48:08 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:45.620 04:48:08 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:45.620 04:48:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:45.620 fe5594ab-bf67-4ffe-8e61-7d0ff1af7c3b 00:07:45.878 04:48:09 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:45.878 04:48:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:45.878 04:48:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:45.878 04:48:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:45.878 04:48:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:46.136 04:48:09 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:46.137 04:48:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:46.395 04:48:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:46.395 04:48:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:46.654 04:48:09 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:07:46.654 04:48:09 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:07:46.654 04:48:09 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:9ac83395-caa6-4052-9c44-b610d7a309e2 bdev_register:578e178d-5662-4cb8-8f25-3b907b93a8d7 bdev_register:140dd7a6-677d-48ff-9247-1cb87bac76eb bdev_register:43637297-92aa-4c90-a1f5-e05258207ddf 00:07:46.654 04:48:09 -- json_config/json_config.sh@70 -- # local events_to_check 00:07:46.654 04:48:09 -- json_config/json_config.sh@71 -- # local recorded_events 00:07:46.654 04:48:09 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:46.654 04:48:09 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:9ac83395-caa6-4052-9c44-b610d7a309e2 bdev_register:578e178d-5662-4cb8-8f25-3b907b93a8d7 bdev_register:140dd7a6-677d-48ff-9247-1cb87bac76eb bdev_register:43637297-92aa-4c90-a1f5-e05258207ddf 00:07:46.654 04:48:09 -- json_config/json_config.sh@74 -- # sort 00:07:46.654 04:48:09 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:07:46.654 04:48:09 -- json_config/json_config.sh@75 -- # get_notifications 00:07:46.654 04:48:09 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:46.654 04:48:09 -- json_config/json_config.sh@75 -- # sort 00:07:46.654 04:48:09 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.654 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.654 04:48:10 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:46.654 04:48:10 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:46.654 04:48:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.913 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.913 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.914 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:9ac83395-caa6-4052-9c44-b610d7a309e2 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.914 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:578e178d-5662-4cb8-8f25-3b907b93a8d7 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.914 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:140dd7a6-677d-48ff-9247-1cb87bac76eb 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.914 04:48:10 -- json_config/json_config.sh@65 -- # echo bdev_register:43637297-92aa-4c90-a1f5-e05258207ddf 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.914 04:48:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.914 04:48:10 -- json_config/json_config.sh@77 
-- # [[ bdev_register:140dd7a6-677d-48ff-9247-1cb87bac76eb bdev_register:43637297-92aa-4c90-a1f5-e05258207ddf bdev_register:578e178d-5662-4cb8-8f25-3b907b93a8d7 bdev_register:9ac83395-caa6-4052-9c44-b610d7a309e2 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\4\0\d\d\7\a\6\-\6\7\7\d\-\4\8\f\f\-\9\2\4\7\-\1\c\b\8\7\b\a\c\7\6\e\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\3\6\3\7\2\9\7\-\9\2\a\a\-\4\c\9\0\-\a\1\f\5\-\e\0\5\2\5\8\2\0\7\d\d\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\7\8\e\1\7\8\d\-\5\6\6\2\-\4\c\b\8\-\8\f\2\5\-\3\b\9\0\7\b\9\3\a\8\d\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\a\c\8\3\3\9\5\-\c\a\a\6\-\4\0\5\2\-\9\c\4\4\-\b\6\1\0\d\7\a\3\0\9\e\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:07:46.914 04:48:10 -- json_config/json_config.sh@89 -- # cat 00:07:46.914 04:48:10 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:140dd7a6-677d-48ff-9247-1cb87bac76eb bdev_register:43637297-92aa-4c90-a1f5-e05258207ddf bdev_register:578e178d-5662-4cb8-8f25-3b907b93a8d7 bdev_register:9ac83395-caa6-4052-9c44-b610d7a309e2 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:07:46.914 Expected events matched: 00:07:46.914 bdev_register:140dd7a6-677d-48ff-9247-1cb87bac76eb 00:07:46.914 bdev_register:43637297-92aa-4c90-a1f5-e05258207ddf 00:07:46.914 bdev_register:578e178d-5662-4cb8-8f25-3b907b93a8d7 00:07:46.914 bdev_register:9ac83395-caa6-4052-9c44-b610d7a309e2 00:07:46.914 bdev_register:Malloc0 00:07:46.914 bdev_register:Malloc0p0 00:07:46.914 bdev_register:Malloc0p1 00:07:46.914 bdev_register:Malloc0p2 00:07:46.914 bdev_register:Malloc1 00:07:46.914 bdev_register:Malloc3 00:07:46.914 bdev_register:Null0 00:07:46.914 bdev_register:Nvme0n1 00:07:46.914 bdev_register:Nvme0n1p0 00:07:46.914 bdev_register:Nvme0n1p1 00:07:46.914 bdev_register:PTBdevFromMalloc3 00:07:46.914 bdev_register:aio_disk 00:07:46.914 04:48:10 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:46.914 04:48:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.914 04:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:46.914 04:48:10 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:46.914 04:48:10 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:46.914 04:48:10 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:46.914 04:48:10 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:46.914 04:48:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.914 04:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:46.914 
04:48:10 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:46.914 04:48:10 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:46.914 04:48:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:47.173 MallocBdevForConfigChangeCheck 00:07:47.174 04:48:10 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:47.174 04:48:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.174 04:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:47.174 04:48:10 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:47.174 04:48:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:47.742 INFO: shutting down applications... 00:07:47.742 04:48:10 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:47.742 04:48:10 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:47.742 04:48:10 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:47.742 04:48:10 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:47.742 04:48:10 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:47.742 [2024-11-18 04:48:11.159923] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:48.001 Calling clear_vhost_scsi_subsystem 00:07:48.001 Calling clear_iscsi_subsystem 00:07:48.001 Calling clear_vhost_blk_subsystem 00:07:48.001 Calling clear_ublk_subsystem 00:07:48.001 Calling clear_nbd_subsystem 00:07:48.001 Calling clear_nvmf_subsystem 00:07:48.001 Calling clear_bdev_subsystem 00:07:48.001 Calling clear_accel_subsystem 00:07:48.001 Calling clear_iobuf_subsystem 00:07:48.001 Calling clear_sock_subsystem 00:07:48.001 Calling clear_vmd_subsystem 00:07:48.001 Calling clear_scheduler_subsystem 00:07:48.001 04:48:11 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:48.001 04:48:11 -- json_config/json_config.sh@396 -- # count=100 00:07:48.001 04:48:11 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:48.001 04:48:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:48.001 04:48:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:48.001 04:48:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:48.260 04:48:11 -- json_config/json_config.sh@398 -- # break 00:07:48.260 04:48:11 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:48.260 04:48:11 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:48.260 04:48:11 -- json_config/json_config.sh@120 -- # local app=target 00:07:48.260 04:48:11 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:48.260 04:48:11 -- json_config/json_config.sh@124 -- # [[ -n 60751 ]] 00:07:48.260 04:48:11 -- json_config/json_config.sh@127 -- # kill -SIGINT 60751 00:07:48.260 04:48:11 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:48.260 04:48:11 -- json_config/json_config.sh@129 -- # (( i < 30 )) 
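(The shutdown handshake traced just above and below this point sends SIGINT to the target and then polls it with kill -0, up to 30 times at half-second intervals. A condensed sketch of that same loop, with the pid taken from the log:

  kill -SIGINT 60751
  for ((i = 0; i < 30; i++)); do
      kill -0 60751 2>/dev/null || break   # process gone: shutdown complete
      sleep 0.5
  done
)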
00:07:48.260 04:48:11 -- json_config/json_config.sh@130 -- # kill -0 60751 00:07:48.260 04:48:11 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:48.828 04:48:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:48.828 04:48:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:48.828 04:48:12 -- json_config/json_config.sh@130 -- # kill -0 60751 00:07:48.828 04:48:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:49.395 SPDK target shutdown done 00:07:49.395 INFO: relaunching applications... 00:07:49.395 04:48:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:49.395 04:48:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:49.395 04:48:12 -- json_config/json_config.sh@130 -- # kill -0 60751 00:07:49.395 04:48:12 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:49.395 04:48:12 -- json_config/json_config.sh@132 -- # break 00:07:49.395 04:48:12 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:49.395 04:48:12 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:49.395 04:48:12 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:49.395 04:48:12 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:49.395 04:48:12 -- json_config/json_config.sh@98 -- # local app=target 00:07:49.395 04:48:12 -- json_config/json_config.sh@99 -- # shift 00:07:49.395 04:48:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:49.395 04:48:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:49.395 04:48:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:49.395 04:48:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:49.396 04:48:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:49.396 04:48:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=60995 00:07:49.396 04:48:12 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:49.396 04:48:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:49.396 Waiting for target to run... 00:07:49.396 04:48:12 -- json_config/json_config.sh@114 -- # waitforlisten 60995 /var/tmp/spdk_tgt.sock 00:07:49.396 04:48:12 -- common/autotest_common.sh@829 -- # '[' -z 60995 ']' 00:07:49.396 04:48:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:49.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:49.396 04:48:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.396 04:48:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:49.396 04:48:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.396 04:48:12 -- common/autotest_common.sh@10 -- # set +x 00:07:49.396 [2024-11-18 04:48:12.816034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:49.396 [2024-11-18 04:48:12.816200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60995 ] 00:07:49.655 [2024-11-18 04:48:13.144667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.914 [2024-11-18 04:48:13.304146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.914 [2024-11-18 04:48:13.304433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.479 [2024-11-18 04:48:13.950953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:50.479 [2024-11-18 04:48:13.951038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:50.480 [2024-11-18 04:48:13.958925] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:50.480 [2024-11-18 04:48:13.958988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:50.480 [2024-11-18 04:48:13.966949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.480 [2024-11-18 04:48:13.967006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:50.480 [2024-11-18 04:48:13.967021] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:50.738 [2024-11-18 04:48:14.060320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.738 [2024-11-18 04:48:14.060381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.738 [2024-11-18 04:48:14.060404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:07:50.738 [2024-11-18 04:48:14.060418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.738 [2024-11-18 04:48:14.060969] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.738 [2024-11-18 04:48:14.061002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:50.997 04:48:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.997 04:48:14 -- common/autotest_common.sh@862 -- # return 0 00:07:50.997 00:07:50.997 04:48:14 -- json_config/json_config.sh@115 -- # echo '' 00:07:50.997 04:48:14 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:50.997 INFO: Checking if target configuration is the same... 00:07:50.997 04:48:14 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:50.997 04:48:14 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:50.997 04:48:14 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:50.997 04:48:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:50.997 + '[' 2 -ne 2 ']' 00:07:50.997 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:50.997 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:50.997 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:50.997 +++ basename /dev/fd/62 00:07:50.997 ++ mktemp /tmp/62.XXX 00:07:50.997 + tmp_file_1=/tmp/62.6TC 00:07:50.997 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:50.997 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:50.997 + tmp_file_2=/tmp/spdk_tgt_config.json.Ew0 00:07:50.997 + ret=0 00:07:50.997 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:51.564 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:51.564 + diff -u /tmp/62.6TC /tmp/spdk_tgt_config.json.Ew0 00:07:51.564 INFO: JSON config files are the same 00:07:51.564 + echo 'INFO: JSON config files are the same' 00:07:51.564 + rm /tmp/62.6TC /tmp/spdk_tgt_config.json.Ew0 00:07:51.564 + exit 0 00:07:51.564 04:48:14 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:51.564 INFO: changing configuration and checking if this can be detected... 00:07:51.564 04:48:14 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:51.564 04:48:14 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:51.564 04:48:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:51.823 04:48:15 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:51.823 04:48:15 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:51.823 04:48:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:51.823 + '[' 2 -ne 2 ']' 00:07:51.823 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:51.823 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:51.823 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:51.823 +++ basename /dev/fd/62 00:07:51.823 ++ mktemp /tmp/62.XXX 00:07:51.823 + tmp_file_1=/tmp/62.vrR 00:07:51.823 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:51.823 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:51.823 + tmp_file_2=/tmp/spdk_tgt_config.json.GPT 00:07:51.823 + ret=0 00:07:51.823 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:52.081 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:52.081 + diff -u /tmp/62.vrR /tmp/spdk_tgt_config.json.GPT 00:07:52.082 + ret=1 00:07:52.082 + echo '=== Start of file: /tmp/62.vrR ===' 00:07:52.082 + cat /tmp/62.vrR 00:07:52.082 + echo '=== End of file: /tmp/62.vrR ===' 00:07:52.082 + echo '' 00:07:52.082 + echo '=== Start of file: /tmp/spdk_tgt_config.json.GPT ===' 00:07:52.082 + cat /tmp/spdk_tgt_config.json.GPT 00:07:52.082 + echo '=== End of file: /tmp/spdk_tgt_config.json.GPT ===' 00:07:52.082 + echo '' 00:07:52.082 + rm /tmp/62.vrR /tmp/spdk_tgt_config.json.GPT 00:07:52.082 + exit 1 00:07:52.341 INFO: configuration change detected. 00:07:52.341 04:48:15 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
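The change-detection idiom here reduces to: snapshot the live config, normalize both sides so JSON key order cannot mask or fake a difference, then diff; the throwaway MallocBdevForConfigChangeCheck bdev exists purely so that deleting it guarantees a visible change. A condensed sketch (file names illustrative; json_diff.sh's mktemp plumbing omitted):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    sort_cfg() { /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort; }
    $rpc save_config | sort_cfg > before.json
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | sort_cfg > after.json
    diff -u before.json after.json > /dev/null || echo 'INFO: configuration change detected.'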
00:07:52.341 04:48:15 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:52.341 04:48:15 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:52.341 04:48:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.341 04:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 04:48:15 -- json_config/json_config.sh@360 -- # local ret=0 00:07:52.341 04:48:15 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:52.341 04:48:15 -- json_config/json_config.sh@370 -- # [[ -n 60995 ]] 00:07:52.341 04:48:15 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:52.341 04:48:15 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:52.341 04:48:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.341 04:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 04:48:15 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:52.341 04:48:15 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:52.341 04:48:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:52.341 04:48:15 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:52.341 04:48:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:52.908 04:48:16 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:52.908 04:48:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:52.908 04:48:16 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:52.908 04:48:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:53.167 04:48:16 -- json_config/json_config.sh@246 -- # uname -s 00:07:53.167 04:48:16 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:53.167 04:48:16 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:53.167 04:48:16 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:53.167 04:48:16 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:53.167 04:48:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.167 04:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.167 04:48:16 -- json_config/json_config.sh@376 -- # killprocess 60995 00:07:53.167 04:48:16 -- common/autotest_common.sh@936 -- # '[' -z 60995 ']' 00:07:53.167 04:48:16 -- common/autotest_common.sh@940 -- # kill -0 60995 00:07:53.167 04:48:16 -- common/autotest_common.sh@941 -- # uname 00:07:53.167 04:48:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.167 04:48:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60995 00:07:53.167 04:48:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.167 04:48:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.167 killing process with pid 60995 00:07:53.167 04:48:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60995' 00:07:53.167 04:48:16 -- common/autotest_common.sh@955 -- # kill 60995 00:07:53.167 04:48:16 -- common/autotest_common.sh@960 -- # wait 60995 00:07:54.104 04:48:17 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:54.104 04:48:17 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:54.104 04:48:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.104 04:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:54.104 04:48:17 -- json_config/json_config.sh@381 -- # return 0 00:07:54.104 04:48:17 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:54.104 INFO: Success 00:07:54.104 ************************************ 00:07:54.104 END TEST json_config 00:07:54.104 ************************************ 00:07:54.104 00:07:54.104 real 0m13.438s 00:07:54.104 user 0m19.494s 00:07:54.104 sys 0m2.214s 00:07:54.104 04:48:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.104 04:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:54.366 04:48:17 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:54.366 04:48:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.366 04:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.366 04:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:54.366 ************************************ 00:07:54.366 START TEST json_config_extra_key 00:07:54.366 ************************************ 00:07:54.366 04:48:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:54.366 04:48:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.366 04:48:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.366 04:48:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.366 04:48:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.366 04:48:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.366 04:48:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.366 04:48:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.366 04:48:17 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.366 04:48:17 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.366 04:48:17 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.366 04:48:17 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.366 04:48:17 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.366 04:48:17 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.366 04:48:17 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.366 04:48:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.366 04:48:17 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.366 04:48:17 -- scripts/common.sh@344 -- # : 1 00:07:54.366 04:48:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.366 04:48:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.366 04:48:17 -- scripts/common.sh@364 -- # decimal 1 00:07:54.366 04:48:17 -- scripts/common.sh@352 -- # local d=1 00:07:54.366 04:48:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.366 04:48:17 -- scripts/common.sh@354 -- # echo 1 00:07:54.366 04:48:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.366 04:48:17 -- scripts/common.sh@365 -- # decimal 2 00:07:54.366 04:48:17 -- scripts/common.sh@352 -- # local d=2 00:07:54.366 04:48:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.366 04:48:17 -- scripts/common.sh@354 -- # echo 2 00:07:54.366 04:48:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.366 04:48:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.366 04:48:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.366 04:48:17 -- scripts/common.sh@367 -- # return 0 00:07:54.366 04:48:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.366 04:48:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.366 --rc genhtml_branch_coverage=1 00:07:54.366 --rc genhtml_function_coverage=1 00:07:54.366 --rc genhtml_legend=1 00:07:54.366 --rc geninfo_all_blocks=1 00:07:54.366 --rc geninfo_unexecuted_blocks=1 00:07:54.366 00:07:54.366 ' 00:07:54.366 04:48:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.366 --rc genhtml_branch_coverage=1 00:07:54.366 --rc genhtml_function_coverage=1 00:07:54.366 --rc genhtml_legend=1 00:07:54.366 --rc geninfo_all_blocks=1 00:07:54.366 --rc geninfo_unexecuted_blocks=1 00:07:54.366 00:07:54.366 ' 00:07:54.366 04:48:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.366 --rc genhtml_branch_coverage=1 00:07:54.366 --rc genhtml_function_coverage=1 00:07:54.366 --rc genhtml_legend=1 00:07:54.366 --rc geninfo_all_blocks=1 00:07:54.366 --rc geninfo_unexecuted_blocks=1 00:07:54.366 00:07:54.366 ' 00:07:54.366 04:48:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.366 --rc genhtml_branch_coverage=1 00:07:54.366 --rc genhtml_function_coverage=1 00:07:54.366 --rc genhtml_legend=1 00:07:54.366 --rc geninfo_all_blocks=1 00:07:54.366 --rc geninfo_unexecuted_blocks=1 00:07:54.366 00:07:54.366 ' 00:07:54.366 04:48:17 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.366 04:48:17 -- nvmf/common.sh@7 -- # uname -s 00:07:54.366 04:48:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.366 04:48:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.366 04:48:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.366 04:48:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.366 04:48:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.366 04:48:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.366 04:48:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.366 04:48:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.366 04:48:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.366 04:48:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.366 04:48:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e74b746-ded7-4dde-a22d-3af59a1bbf22 00:07:54.366 04:48:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=7e74b746-ded7-4dde-a22d-3af59a1bbf22 00:07:54.366 04:48:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.366 04:48:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.366 04:48:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:54.366 04:48:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.366 04:48:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.366 04:48:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.366 04:48:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.366 04:48:17 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.366 04:48:17 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.366 04:48:17 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.366 04:48:17 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.366 04:48:17 -- paths/export.sh@6 -- # export PATH 00:07:54.366 04:48:17 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.366 04:48:17 -- nvmf/common.sh@46 -- # : 0 00:07:54.366 04:48:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:54.366 04:48:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:54.366 04:48:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:54.366 04:48:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.366 04:48:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.366 04:48:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:54.366 04:48:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:54.366 04:48:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:54.366 04:48:17 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:54.366 04:48:17 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:54.367 INFO: launching applications... 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=61173 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:54.367 Waiting for target to run... 
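The launch itself is a single spdk_tgt invocation pointed at a fixed JSON config and a private RPC socket, backgrounded so its pid can be signaled during teardown. The equivalent command line, with flags as traced:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!    # the harness keeps this in its app_pid[] map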
00:07:54.367 04:48:17 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 61173 /var/tmp/spdk_tgt.sock 00:07:54.367 04:48:17 -- common/autotest_common.sh@829 -- # '[' -z 61173 ']' 00:07:54.367 04:48:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:54.367 04:48:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.367 04:48:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:54.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:54.367 04:48:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.367 04:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:54.642 [2024-11-18 04:48:17.913908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.642 [2024-11-18 04:48:17.914288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:07:54.910 [2024-11-18 04:48:18.276785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.910 [2024-11-18 04:48:18.423679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.910 [2024-11-18 04:48:18.423914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.290 00:07:56.290 INFO: shutting down applications... 00:07:56.290 04:48:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.290 04:48:19 -- common/autotest_common.sh@862 -- # return 0 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
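The waitforlisten step above blocks until the freshly started target answers on its RPC socket. One way to express that wait (a sketch only, not the helper's actual implementation; rpc_get_methods simply serves as a cheap probe, and 100 matches the traced max_retries):

    for ((retry = 0; retry < 100; retry++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            rpc_get_methods > /dev/null 2>&1 && break   # target is up and serving RPCs
        sleep 0.1
    done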
00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 61173 ]] 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 61173 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61173 00:07:56.290 04:48:19 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:56.858 04:48:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:56.858 04:48:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:56.858 04:48:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61173 00:07:56.858 04:48:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:57.117 04:48:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:57.117 04:48:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:57.117 04:48:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61173 00:07:57.117 04:48:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:57.685 04:48:21 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:57.685 04:48:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:57.685 04:48:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61173 00:07:57.685 04:48:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:58.253 04:48:21 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:58.253 04:48:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:58.253 04:48:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61173 00:07:58.253 04:48:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61173 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:58.822 SPDK target shutdown done 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:58.822 Success 00:07:58.822 04:48:22 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:58.822 ************************************ 00:07:58.822 END TEST json_config_extra_key 00:07:58.822 ************************************ 00:07:58.822 00:07:58.822 real 0m4.439s 00:07:58.822 user 0m4.162s 00:07:58.822 sys 0m0.597s 00:07:58.822 04:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.822 04:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:58.822 04:48:22 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:58.822 04:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.822 04:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.822 04:48:22 -- common/autotest_common.sh@10 -- # 
set +x 00:07:58.822 ************************************ 00:07:58.822 START TEST alias_rpc 00:07:58.822 ************************************ 00:07:58.822 04:48:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:58.822 * Looking for test storage... 00:07:58.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:58.822 04:48:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:58.822 04:48:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:58.822 04:48:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:58.822 04:48:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:58.822 04:48:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:58.822 04:48:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:58.822 04:48:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:58.822 04:48:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:58.822 04:48:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:58.822 04:48:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.822 04:48:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:58.822 04:48:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:58.822 04:48:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:58.822 04:48:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:58.822 04:48:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:58.822 04:48:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:58.822 04:48:22 -- scripts/common.sh@344 -- # : 1 00:07:58.822 04:48:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:58.822 04:48:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.822 04:48:22 -- scripts/common.sh@364 -- # decimal 1 00:07:58.822 04:48:22 -- scripts/common.sh@352 -- # local d=1 00:07:58.822 04:48:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.822 04:48:22 -- scripts/common.sh@354 -- # echo 1 00:07:58.822 04:48:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:58.822 04:48:22 -- scripts/common.sh@365 -- # decimal 2 00:07:58.822 04:48:22 -- scripts/common.sh@352 -- # local d=2 00:07:58.822 04:48:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.822 04:48:22 -- scripts/common.sh@354 -- # echo 2 00:07:58.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:58.822 04:48:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:58.822 04:48:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:58.822 04:48:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:58.822 04:48:22 -- scripts/common.sh@367 -- # return 0 00:07:58.822 04:48:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.822 04:48:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.822 --rc genhtml_branch_coverage=1 00:07:58.822 --rc genhtml_function_coverage=1 00:07:58.822 --rc genhtml_legend=1 00:07:58.822 --rc geninfo_all_blocks=1 00:07:58.822 --rc geninfo_unexecuted_blocks=1 00:07:58.822 00:07:58.822 ' 00:07:58.822 04:48:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.822 --rc genhtml_branch_coverage=1 00:07:58.822 --rc genhtml_function_coverage=1 00:07:58.822 --rc genhtml_legend=1 00:07:58.822 --rc geninfo_all_blocks=1 00:07:58.822 --rc geninfo_unexecuted_blocks=1 00:07:58.822 00:07:58.822 ' 00:07:58.822 04:48:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.822 --rc genhtml_branch_coverage=1 00:07:58.822 --rc genhtml_function_coverage=1 00:07:58.822 --rc genhtml_legend=1 00:07:58.822 --rc geninfo_all_blocks=1 00:07:58.822 --rc geninfo_unexecuted_blocks=1 00:07:58.822 00:07:58.822 ' 00:07:58.822 04:48:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.822 --rc genhtml_branch_coverage=1 00:07:58.822 --rc genhtml_function_coverage=1 00:07:58.822 --rc genhtml_legend=1 00:07:58.822 --rc geninfo_all_blocks=1 00:07:58.822 --rc geninfo_unexecuted_blocks=1 00:07:58.822 00:07:58.822 ' 00:07:58.822 04:48:22 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.822 04:48:22 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61285 00:07:58.822 04:48:22 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61285 00:07:58.822 04:48:22 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:58.822 04:48:22 -- common/autotest_common.sh@829 -- # '[' -z 61285 ']' 00:07:58.822 04:48:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.822 04:48:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.822 04:48:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.822 04:48:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.822 04:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:59.082 [2024-11-18 04:48:22.407216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
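The scripts/common.sh block that reappears before each sub-test is a dotted-version comparison: it checks whether the installed lcov is older than 2 so the right coverage options can be exported (the trace shows the pre-2.0 --rc lcov_* spelling being chosen). The core of the cmp_versions idea, condensed into a sketch that assumes plain numeric fields:

    version_lt() {                        # true when $1 sorts before $2
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'legacy lcov options needed'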
00:07:59.082 [2024-11-18 04:48:22.407374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:07:59.082 [2024-11-18 04:48:22.577957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.341 [2024-11-18 04:48:22.747767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.341 [2024-11-18 04:48:22.748428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.719 04:48:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.719 04:48:24 -- common/autotest_common.sh@862 -- # return 0 00:08:00.719 04:48:24 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:00.979 04:48:24 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61285 00:08:00.979 04:48:24 -- common/autotest_common.sh@936 -- # '[' -z 61285 ']' 00:08:00.979 04:48:24 -- common/autotest_common.sh@940 -- # kill -0 61285 00:08:00.979 04:48:24 -- common/autotest_common.sh@941 -- # uname 00:08:00.979 04:48:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.979 04:48:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61285 00:08:00.979 killing process with pid 61285 00:08:00.979 04:48:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:00.979 04:48:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:00.979 04:48:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61285' 00:08:00.979 04:48:24 -- common/autotest_common.sh@955 -- # kill 61285 00:08:00.979 04:48:24 -- common/autotest_common.sh@960 -- # wait 61285 00:08:03.514 ************************************ 00:08:03.514 END TEST alias_rpc 00:08:03.514 ************************************ 00:08:03.514 00:08:03.514 real 0m4.577s 00:08:03.514 user 0m4.918s 00:08:03.514 sys 0m0.549s 00:08:03.514 04:48:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.514 04:48:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.514 04:48:26 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:08:03.514 04:48:26 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:03.514 04:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.514 04:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.514 04:48:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.514 ************************************ 00:08:03.514 START TEST spdkcli_tcp 00:08:03.514 ************************************ 00:08:03.514 04:48:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:03.514 * Looking for test storage... 
00:08:03.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:03.514 04:48:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.514 04:48:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.514 04:48:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.514 04:48:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.514 04:48:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.514 04:48:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.514 04:48:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.514 04:48:26 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.514 04:48:26 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.514 04:48:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.514 04:48:26 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.514 04:48:26 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.514 04:48:26 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.514 04:48:26 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.514 04:48:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.514 04:48:26 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.514 04:48:26 -- scripts/common.sh@344 -- # : 1 00:08:03.514 04:48:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.514 04:48:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.514 04:48:26 -- scripts/common.sh@364 -- # decimal 1 00:08:03.514 04:48:26 -- scripts/common.sh@352 -- # local d=1 00:08:03.514 04:48:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.514 04:48:26 -- scripts/common.sh@354 -- # echo 1 00:08:03.514 04:48:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.514 04:48:26 -- scripts/common.sh@365 -- # decimal 2 00:08:03.514 04:48:26 -- scripts/common.sh@352 -- # local d=2 00:08:03.514 04:48:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.514 04:48:26 -- scripts/common.sh@354 -- # echo 2 00:08:03.514 04:48:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.514 04:48:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.514 04:48:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.514 04:48:26 -- scripts/common.sh@367 -- # return 0 00:08:03.514 04:48:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.514 04:48:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.514 --rc genhtml_branch_coverage=1 00:08:03.514 --rc genhtml_function_coverage=1 00:08:03.514 --rc genhtml_legend=1 00:08:03.514 --rc geninfo_all_blocks=1 00:08:03.514 --rc geninfo_unexecuted_blocks=1 00:08:03.514 00:08:03.514 ' 00:08:03.514 04:48:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.515 --rc genhtml_branch_coverage=1 00:08:03.515 --rc genhtml_function_coverage=1 00:08:03.515 --rc genhtml_legend=1 00:08:03.515 --rc geninfo_all_blocks=1 00:08:03.515 --rc geninfo_unexecuted_blocks=1 00:08:03.515 00:08:03.515 ' 00:08:03.515 04:48:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.515 --rc genhtml_branch_coverage=1 00:08:03.515 --rc genhtml_function_coverage=1 00:08:03.515 --rc genhtml_legend=1 00:08:03.515 --rc geninfo_all_blocks=1 00:08:03.515 --rc geninfo_unexecuted_blocks=1 00:08:03.515 00:08:03.515 ' 00:08:03.515 04:48:26 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.515 --rc genhtml_branch_coverage=1 00:08:03.515 --rc genhtml_function_coverage=1 00:08:03.515 --rc genhtml_legend=1 00:08:03.515 --rc geninfo_all_blocks=1 00:08:03.515 --rc geninfo_unexecuted_blocks=1 00:08:03.515 00:08:03.515 ' 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:03.515 04:48:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:03.515 04:48:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:03.515 04:48:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.515 04:48:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=61398 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@27 -- # waitforlisten 61398 00:08:03.515 04:48:26 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:03.515 04:48:26 -- common/autotest_common.sh@829 -- # '[' -z 61398 ']' 00:08:03.515 04:48:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.515 04:48:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.515 04:48:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.515 04:48:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.515 04:48:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.775 [2024-11-18 04:48:27.049246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
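The tcp test's central move, traced in the next chunk, is bridging the UNIX-domain RPC socket to TCP with socat, then pointing rpc.py at 127.0.0.1:9998 with retry and timeout flags. The moving parts in isolation:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP 9998 <-> RPC socket
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods                  # -r retries, -t timeout (s)
    kill "$socat_pid"                                         # tear the bridge down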
00:08:03.775 [2024-11-18 04:48:27.049629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:08:03.775 [2024-11-18 04:48:27.226318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.034 [2024-11-18 04:48:27.437920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.034 [2024-11-18 04:48:27.438609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.034 [2024-11-18 04:48:27.438730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.412 04:48:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.412 04:48:28 -- common/autotest_common.sh@862 -- # return 0 00:08:05.412 04:48:28 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:05.412 04:48:28 -- spdkcli/tcp.sh@31 -- # socat_pid=61423 00:08:05.412 04:48:28 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:05.672 [ 00:08:05.672 "spdk_get_version", 00:08:05.672 "rpc_get_methods", 00:08:05.672 "trace_get_info", 00:08:05.672 "trace_get_tpoint_group_mask", 00:08:05.672 "trace_disable_tpoint_group", 00:08:05.672 "trace_enable_tpoint_group", 00:08:05.672 "trace_clear_tpoint_mask", 00:08:05.672 "trace_set_tpoint_mask", 00:08:05.672 "framework_get_pci_devices", 00:08:05.672 "framework_get_config", 00:08:05.672 "framework_get_subsystems", 00:08:05.672 "iobuf_get_stats", 00:08:05.672 "iobuf_set_options", 00:08:05.672 "sock_set_default_impl", 00:08:05.672 "sock_impl_set_options", 00:08:05.672 "sock_impl_get_options", 00:08:05.672 "vmd_rescan", 00:08:05.672 "vmd_remove_device", 00:08:05.672 "vmd_enable", 00:08:05.672 "accel_get_stats", 00:08:05.672 "accel_set_options", 00:08:05.672 "accel_set_driver", 00:08:05.672 "accel_crypto_key_destroy", 00:08:05.672 "accel_crypto_keys_get", 00:08:05.672 "accel_crypto_key_create", 00:08:05.672 "accel_assign_opc", 00:08:05.672 "accel_get_module_info", 00:08:05.672 "accel_get_opc_assignments", 00:08:05.672 "notify_get_notifications", 00:08:05.672 "notify_get_types", 00:08:05.672 "bdev_get_histogram", 00:08:05.672 "bdev_enable_histogram", 00:08:05.672 "bdev_set_qos_limit", 00:08:05.672 "bdev_set_qd_sampling_period", 00:08:05.672 "bdev_get_bdevs", 00:08:05.672 "bdev_reset_iostat", 00:08:05.672 "bdev_get_iostat", 00:08:05.672 "bdev_examine", 00:08:05.672 "bdev_wait_for_examine", 00:08:05.672 "bdev_set_options", 00:08:05.672 "scsi_get_devices", 00:08:05.672 "thread_set_cpumask", 00:08:05.672 "framework_get_scheduler", 00:08:05.672 "framework_set_scheduler", 00:08:05.672 "framework_get_reactors", 00:08:05.672 "thread_get_io_channels", 00:08:05.672 "thread_get_pollers", 00:08:05.672 "thread_get_stats", 00:08:05.672 "framework_monitor_context_switch", 00:08:05.672 "spdk_kill_instance", 00:08:05.672 "log_enable_timestamps", 00:08:05.672 "log_get_flags", 00:08:05.672 "log_clear_flag", 00:08:05.672 "log_set_flag", 00:08:05.672 "log_get_level", 00:08:05.672 "log_set_level", 00:08:05.672 "log_get_print_level", 00:08:05.672 "log_set_print_level", 00:08:05.672 "framework_enable_cpumask_locks", 00:08:05.672 "framework_disable_cpumask_locks", 00:08:05.672 "framework_wait_init", 00:08:05.672 "framework_start_init", 00:08:05.672 "virtio_blk_create_transport", 00:08:05.672 "virtio_blk_get_transports", 
00:08:05.672 "vhost_controller_set_coalescing", 00:08:05.672 "vhost_get_controllers", 00:08:05.672 "vhost_delete_controller", 00:08:05.672 "vhost_create_blk_controller", 00:08:05.672 "vhost_scsi_controller_remove_target", 00:08:05.672 "vhost_scsi_controller_add_target", 00:08:05.672 "vhost_start_scsi_controller", 00:08:05.672 "vhost_create_scsi_controller", 00:08:05.672 "ublk_recover_disk", 00:08:05.672 "ublk_get_disks", 00:08:05.672 "ublk_stop_disk", 00:08:05.672 "ublk_start_disk", 00:08:05.672 "ublk_destroy_target", 00:08:05.672 "ublk_create_target", 00:08:05.672 "nbd_get_disks", 00:08:05.672 "nbd_stop_disk", 00:08:05.672 "nbd_start_disk", 00:08:05.672 "env_dpdk_get_mem_stats", 00:08:05.672 "nvmf_subsystem_get_listeners", 00:08:05.672 "nvmf_subsystem_get_qpairs", 00:08:05.672 "nvmf_subsystem_get_controllers", 00:08:05.672 "nvmf_get_stats", 00:08:05.672 "nvmf_get_transports", 00:08:05.672 "nvmf_create_transport", 00:08:05.672 "nvmf_get_targets", 00:08:05.672 "nvmf_delete_target", 00:08:05.672 "nvmf_create_target", 00:08:05.672 "nvmf_subsystem_allow_any_host", 00:08:05.672 "nvmf_subsystem_remove_host", 00:08:05.672 "nvmf_subsystem_add_host", 00:08:05.672 "nvmf_subsystem_remove_ns", 00:08:05.672 "nvmf_subsystem_add_ns", 00:08:05.672 "nvmf_subsystem_listener_set_ana_state", 00:08:05.672 "nvmf_discovery_get_referrals", 00:08:05.672 "nvmf_discovery_remove_referral", 00:08:05.672 "nvmf_discovery_add_referral", 00:08:05.672 "nvmf_subsystem_remove_listener", 00:08:05.672 "nvmf_subsystem_add_listener", 00:08:05.672 "nvmf_delete_subsystem", 00:08:05.672 "nvmf_create_subsystem", 00:08:05.672 "nvmf_get_subsystems", 00:08:05.672 "nvmf_set_crdt", 00:08:05.672 "nvmf_set_config", 00:08:05.672 "nvmf_set_max_subsystems", 00:08:05.672 "iscsi_set_options", 00:08:05.672 "iscsi_get_auth_groups", 00:08:05.672 "iscsi_auth_group_remove_secret", 00:08:05.672 "iscsi_auth_group_add_secret", 00:08:05.672 "iscsi_delete_auth_group", 00:08:05.672 "iscsi_create_auth_group", 00:08:05.672 "iscsi_set_discovery_auth", 00:08:05.672 "iscsi_get_options", 00:08:05.672 "iscsi_target_node_request_logout", 00:08:05.672 "iscsi_target_node_set_redirect", 00:08:05.672 "iscsi_target_node_set_auth", 00:08:05.672 "iscsi_target_node_add_lun", 00:08:05.672 "iscsi_get_connections", 00:08:05.672 "iscsi_portal_group_set_auth", 00:08:05.672 "iscsi_start_portal_group", 00:08:05.672 "iscsi_delete_portal_group", 00:08:05.672 "iscsi_create_portal_group", 00:08:05.672 "iscsi_get_portal_groups", 00:08:05.672 "iscsi_delete_target_node", 00:08:05.672 "iscsi_target_node_remove_pg_ig_maps", 00:08:05.672 "iscsi_target_node_add_pg_ig_maps", 00:08:05.672 "iscsi_create_target_node", 00:08:05.672 "iscsi_get_target_nodes", 00:08:05.672 "iscsi_delete_initiator_group", 00:08:05.672 "iscsi_initiator_group_remove_initiators", 00:08:05.672 "iscsi_initiator_group_add_initiators", 00:08:05.672 "iscsi_create_initiator_group", 00:08:05.672 "iscsi_get_initiator_groups", 00:08:05.672 "iaa_scan_accel_module", 00:08:05.672 "dsa_scan_accel_module", 00:08:05.672 "ioat_scan_accel_module", 00:08:05.672 "accel_error_inject_error", 00:08:05.672 "bdev_iscsi_delete", 00:08:05.672 "bdev_iscsi_create", 00:08:05.672 "bdev_iscsi_set_options", 00:08:05.672 "bdev_virtio_attach_controller", 00:08:05.672 "bdev_virtio_scsi_get_devices", 00:08:05.672 "bdev_virtio_detach_controller", 00:08:05.672 "bdev_virtio_blk_set_hotplug", 00:08:05.672 "bdev_ftl_set_property", 00:08:05.672 "bdev_ftl_get_properties", 00:08:05.672 "bdev_ftl_get_stats", 00:08:05.672 "bdev_ftl_unmap", 00:08:05.672 
"bdev_ftl_unload", 00:08:05.672 "bdev_ftl_delete", 00:08:05.672 "bdev_ftl_load", 00:08:05.672 "bdev_ftl_create", 00:08:05.672 "bdev_aio_delete", 00:08:05.672 "bdev_aio_rescan", 00:08:05.672 "bdev_aio_create", 00:08:05.672 "blobfs_create", 00:08:05.672 "blobfs_detect", 00:08:05.672 "blobfs_set_cache_size", 00:08:05.672 "bdev_zone_block_delete", 00:08:05.672 "bdev_zone_block_create", 00:08:05.672 "bdev_delay_delete", 00:08:05.672 "bdev_delay_create", 00:08:05.672 "bdev_delay_update_latency", 00:08:05.672 "bdev_split_delete", 00:08:05.672 "bdev_split_create", 00:08:05.672 "bdev_error_inject_error", 00:08:05.672 "bdev_error_delete", 00:08:05.672 "bdev_error_create", 00:08:05.672 "bdev_raid_set_options", 00:08:05.672 "bdev_raid_remove_base_bdev", 00:08:05.672 "bdev_raid_add_base_bdev", 00:08:05.672 "bdev_raid_delete", 00:08:05.672 "bdev_raid_create", 00:08:05.672 "bdev_raid_get_bdevs", 00:08:05.672 "bdev_lvol_grow_lvstore", 00:08:05.672 "bdev_lvol_get_lvols", 00:08:05.672 "bdev_lvol_get_lvstores", 00:08:05.672 "bdev_lvol_delete", 00:08:05.672 "bdev_lvol_set_read_only", 00:08:05.672 "bdev_lvol_resize", 00:08:05.672 "bdev_lvol_decouple_parent", 00:08:05.672 "bdev_lvol_inflate", 00:08:05.672 "bdev_lvol_rename", 00:08:05.672 "bdev_lvol_clone_bdev", 00:08:05.672 "bdev_lvol_clone", 00:08:05.672 "bdev_lvol_snapshot", 00:08:05.672 "bdev_lvol_create", 00:08:05.672 "bdev_lvol_delete_lvstore", 00:08:05.672 "bdev_lvol_rename_lvstore", 00:08:05.672 "bdev_lvol_create_lvstore", 00:08:05.672 "bdev_passthru_delete", 00:08:05.672 "bdev_passthru_create", 00:08:05.672 "bdev_nvme_cuse_unregister", 00:08:05.672 "bdev_nvme_cuse_register", 00:08:05.672 "bdev_opal_new_user", 00:08:05.672 "bdev_opal_set_lock_state", 00:08:05.672 "bdev_opal_delete", 00:08:05.672 "bdev_opal_get_info", 00:08:05.672 "bdev_opal_create", 00:08:05.672 "bdev_nvme_opal_revert", 00:08:05.672 "bdev_nvme_opal_init", 00:08:05.672 "bdev_nvme_send_cmd", 00:08:05.672 "bdev_nvme_get_path_iostat", 00:08:05.672 "bdev_nvme_get_mdns_discovery_info", 00:08:05.672 "bdev_nvme_stop_mdns_discovery", 00:08:05.672 "bdev_nvme_start_mdns_discovery", 00:08:05.672 "bdev_nvme_set_multipath_policy", 00:08:05.673 "bdev_nvme_set_preferred_path", 00:08:05.673 "bdev_nvme_get_io_paths", 00:08:05.673 "bdev_nvme_remove_error_injection", 00:08:05.673 "bdev_nvme_add_error_injection", 00:08:05.673 "bdev_nvme_get_discovery_info", 00:08:05.673 "bdev_nvme_stop_discovery", 00:08:05.673 "bdev_nvme_start_discovery", 00:08:05.673 "bdev_nvme_get_controller_health_info", 00:08:05.673 "bdev_nvme_disable_controller", 00:08:05.673 "bdev_nvme_enable_controller", 00:08:05.673 "bdev_nvme_reset_controller", 00:08:05.673 "bdev_nvme_get_transport_statistics", 00:08:05.673 "bdev_nvme_apply_firmware", 00:08:05.673 "bdev_nvme_detach_controller", 00:08:05.673 "bdev_nvme_get_controllers", 00:08:05.673 "bdev_nvme_attach_controller", 00:08:05.673 "bdev_nvme_set_hotplug", 00:08:05.673 "bdev_nvme_set_options", 00:08:05.673 "bdev_null_resize", 00:08:05.673 "bdev_null_delete", 00:08:05.673 "bdev_null_create", 00:08:05.673 "bdev_malloc_delete", 00:08:05.673 "bdev_malloc_create" 00:08:05.673 ] 00:08:05.673 04:48:29 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:05.673 04:48:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.673 04:48:29 -- common/autotest_common.sh@10 -- # set +x 00:08:05.673 04:48:29 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:05.673 04:48:29 -- spdkcli/tcp.sh@38 -- # killprocess 61398 00:08:05.673 04:48:29 -- common/autotest_common.sh@936 -- # '[' 
-z 61398 ']' 00:08:05.673 04:48:29 -- common/autotest_common.sh@940 -- # kill -0 61398 00:08:05.673 04:48:29 -- common/autotest_common.sh@941 -- # uname 00:08:05.673 04:48:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:05.673 04:48:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61398 00:08:05.673 killing process with pid 61398 00:08:05.673 04:48:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:05.673 04:48:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:05.673 04:48:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61398' 00:08:05.673 04:48:29 -- common/autotest_common.sh@955 -- # kill 61398 00:08:05.673 04:48:29 -- common/autotest_common.sh@960 -- # wait 61398 00:08:08.206 ************************************ 00:08:08.207 END TEST spdkcli_tcp 00:08:08.207 ************************************ 00:08:08.207 00:08:08.207 real 0m4.699s 00:08:08.207 user 0m8.637s 00:08:08.207 sys 0m0.672s 00:08:08.207 04:48:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.207 04:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 04:48:31 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:08.207 04:48:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.207 04:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.207 04:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 ************************************ 00:08:08.207 START TEST dpdk_mem_utility 00:08:08.207 ************************************ 00:08:08.207 04:48:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:08.207 * Looking for test storage... 00:08:08.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:08.207 04:48:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.207 04:48:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.207 04:48:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.207 04:48:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.207 04:48:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.207 04:48:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.207 04:48:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.207 04:48:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.207 04:48:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.207 04:48:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.207 04:48:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.207 04:48:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.207 04:48:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.207 04:48:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.207 04:48:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.207 04:48:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.207 04:48:31 -- scripts/common.sh@344 -- # : 1 00:08:08.207 04:48:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.207 04:48:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.207 04:48:31 -- scripts/common.sh@364 -- # decimal 1 00:08:08.207 04:48:31 -- scripts/common.sh@352 -- # local d=1 00:08:08.207 04:48:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.207 04:48:31 -- scripts/common.sh@354 -- # echo 1 00:08:08.207 04:48:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.207 04:48:31 -- scripts/common.sh@365 -- # decimal 2 00:08:08.207 04:48:31 -- scripts/common.sh@352 -- # local d=2 00:08:08.207 04:48:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.207 04:48:31 -- scripts/common.sh@354 -- # echo 2 00:08:08.207 04:48:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.207 04:48:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.207 04:48:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.207 04:48:31 -- scripts/common.sh@367 -- # return 0 00:08:08.207 04:48:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.207 04:48:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.207 --rc genhtml_branch_coverage=1 00:08:08.207 --rc genhtml_function_coverage=1 00:08:08.207 --rc genhtml_legend=1 00:08:08.207 --rc geninfo_all_blocks=1 00:08:08.207 --rc geninfo_unexecuted_blocks=1 00:08:08.207 00:08:08.207 ' 00:08:08.207 04:48:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.207 --rc genhtml_branch_coverage=1 00:08:08.207 --rc genhtml_function_coverage=1 00:08:08.207 --rc genhtml_legend=1 00:08:08.207 --rc geninfo_all_blocks=1 00:08:08.207 --rc geninfo_unexecuted_blocks=1 00:08:08.207 00:08:08.207 ' 00:08:08.207 04:48:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.207 --rc genhtml_branch_coverage=1 00:08:08.207 --rc genhtml_function_coverage=1 00:08:08.207 --rc genhtml_legend=1 00:08:08.207 --rc geninfo_all_blocks=1 00:08:08.207 --rc geninfo_unexecuted_blocks=1 00:08:08.207 00:08:08.207 ' 00:08:08.207 04:48:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.207 --rc genhtml_branch_coverage=1 00:08:08.207 --rc genhtml_function_coverage=1 00:08:08.207 --rc genhtml_legend=1 00:08:08.207 --rc geninfo_all_blocks=1 00:08:08.207 --rc geninfo_unexecuted_blocks=1 00:08:08.207 00:08:08.207 ' 00:08:08.207 04:48:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:08.207 04:48:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61527 00:08:08.207 04:48:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61527 00:08:08.207 04:48:31 -- common/autotest_common.sh@829 -- # '[' -z 61527 ']' 00:08:08.207 04:48:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.207 04:48:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:08.207 04:48:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.207 04:48:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
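The dpdk_mem_utility test that follows is a two-step flow: an RPC asks the target to dump its DPDK memory state to a file (the trace shows it landing at /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py post-processes that dump. In outline, with commands as traced:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # write the dump file
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-heap busy/free element detail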
00:08:08.207 04:48:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.207 04:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:08.465 [2024-11-18 04:48:31.793043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.465 [2024-11-18 04:48:31.793244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61527 ] 00:08:08.465 [2024-11-18 04:48:31.968587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.724 [2024-11-18 04:48:32.220687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.724 [2024-11-18 04:48:32.220965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.111 04:48:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.111 04:48:33 -- common/autotest_common.sh@862 -- # return 0 00:08:10.111 04:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:10.111 04:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:10.111 04:48:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.111 04:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:10.111 { 00:08:10.111 "filename": "/tmp/spdk_mem_dump.txt" 00:08:10.111 } 00:08:10.111 04:48:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.111 04:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:10.111 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:10.111 1 heaps totaling size 820.000000 MiB 00:08:10.111 size: 820.000000 MiB heap id: 0 00:08:10.111 end heaps---------- 00:08:10.111 8 mempools totaling size 598.116089 MiB 00:08:10.111 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:10.111 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:10.111 size: 84.521057 MiB name: bdev_io_61527 00:08:10.111 size: 51.011292 MiB name: evtpool_61527 00:08:10.111 size: 50.003479 MiB name: msgpool_61527 00:08:10.111 size: 21.763794 MiB name: PDU_Pool 00:08:10.111 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:10.111 size: 0.026123 MiB name: Session_Pool 00:08:10.111 end mempools------- 00:08:10.111 6 memzones totaling size 4.142822 MiB 00:08:10.111 size: 1.000366 MiB name: RG_ring_0_61527 00:08:10.111 size: 1.000366 MiB name: RG_ring_1_61527 00:08:10.111 size: 1.000366 MiB name: RG_ring_4_61527 00:08:10.111 size: 1.000366 MiB name: RG_ring_5_61527 00:08:10.111 size: 0.125366 MiB name: RG_ring_2_61527 00:08:10.111 size: 0.015991 MiB name: RG_ring_3_61527 00:08:10.111 end memzones------- 00:08:10.111 04:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:10.379 heap id: 0 total size: 820.000000 MiB number of busy elements: 305 number of free elements: 18 00:08:10.379 list of free elements. 
size: 18.450317 MiB 00:08:10.379 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:10.379 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:10.379 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:10.379 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:10.379 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:10.379 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:10.379 element at address: 0x200019600000 with size: 0.999084 MiB 00:08:10.379 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:10.379 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:10.379 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:10.379 element at address: 0x200019900040 with size: 0.936401 MiB 00:08:10.379 element at address: 0x200000200000 with size: 0.829224 MiB 00:08:10.379 element at address: 0x20001b000000 with size: 0.563904 MiB 00:08:10.379 element at address: 0x200019200000 with size: 0.487976 MiB 00:08:10.379 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:10.379 element at address: 0x200013800000 with size: 0.467651 MiB 00:08:10.379 element at address: 0x200028400000 with size: 0.390442 MiB 00:08:10.379 element at address: 0x200003a00000 with size: 0.351990 MiB 00:08:10.379 list of standard malloc elements. size: 199.285278 MiB 00:08:10.379 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:10.379 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:10.379 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:10.379 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:10.379 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:10.379 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:10.379 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:10.379 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:10.379 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:08:10.379 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:08:10.379 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:10.379 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:08:10.379 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:08:10.379 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:08:10.380 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013877b80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013877c80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0927c0 with size: 0.000244 MiB 
00:08:10.380 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:10.380 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:10.381 element at address: 0x200028463f40 with size: 0.000244 MiB 00:08:10.381 element at address: 0x200028464040 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846af80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b080 with size: 0.000244 MiB 00:08:10.381 element at 
address: 0x20002846b180 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b280 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b380 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b480 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b580 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b680 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b780 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b880 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846b980 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846be80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c080 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c180 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c280 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c380 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c480 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c580 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c680 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c780 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c880 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846c980 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d080 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d180 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e280 
with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:10.381 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:10.381 list of memzone associated elements. 
size: 602.264404 MiB 00:08:10.381 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:10.381 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:10.381 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:10.381 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:10.381 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:10.381 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61527_0 00:08:10.381 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:10.381 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61527_0 00:08:10.381 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:10.381 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61527_0 00:08:10.381 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:10.381 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:10.381 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:10.381 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:10.381 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:10.381 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61527 00:08:10.381 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:10.381 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61527 00:08:10.381 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:10.381 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61527 00:08:10.381 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:10.381 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:10.381 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:10.381 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:10.381 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:10.381 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:10.381 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:10.381 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:10.381 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:10.381 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61527 00:08:10.381 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:10.381 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61527 00:08:10.381 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:10.381 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61527 00:08:10.381 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:10.381 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61527 00:08:10.381 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:10.381 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61527 00:08:10.381 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:10.381 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:10.381 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:10.381 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:10.382 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:10.382 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:10.382 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:10.382 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61527 00:08:10.382 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:10.382 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:10.382 element at address: 0x200028464140 with size: 0.023804 MiB 00:08:10.382 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:10.382 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:10.382 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61527 00:08:10.382 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:08:10.382 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:10.382 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:08:10.382 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61527 00:08:10.382 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:10.382 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61527 00:08:10.382 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:08:10.382 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:10.382 04:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:10.382 04:48:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61527 00:08:10.382 04:48:33 -- common/autotest_common.sh@936 -- # '[' -z 61527 ']' 00:08:10.382 04:48:33 -- common/autotest_common.sh@940 -- # kill -0 61527 00:08:10.382 04:48:33 -- common/autotest_common.sh@941 -- # uname 00:08:10.382 04:48:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.382 04:48:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61527 00:08:10.382 04:48:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:10.382 04:48:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:10.382 killing process with pid 61527 00:08:10.382 04:48:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61527' 00:08:10.382 04:48:33 -- common/autotest_common.sh@955 -- # kill 61527 00:08:10.382 04:48:33 -- common/autotest_common.sh@960 -- # wait 61527 00:08:12.288 ************************************ 00:08:12.288 END TEST dpdk_mem_utility 00:08:12.288 ************************************ 00:08:12.288 00:08:12.288 real 0m4.059s 00:08:12.288 user 0m4.315s 00:08:12.288 sys 0m0.602s 00:08:12.288 04:48:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.288 04:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.288 04:48:35 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:12.288 04:48:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.288 04:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.288 04:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.288 ************************************ 00:08:12.288 START TEST event 00:08:12.288 ************************************ 00:08:12.288 04:48:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:12.288 * Looking for test storage... 
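Stripped of the tracing, the dpdk_mem_utility test that just finished is a three-step RPC exercise: launch spdk_tgt, have it dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (which writes /tmp/spdk_mem_dump.txt, per the JSON reply above), then run scripts/dpdk_mem_info.py over the dump, once for the heap/mempool/memzone summary and once with -m 0 for heap 0's element list. A condensed sketch of the same flow, assuming a built tree at $SPDK_DIR and a sleep as a crude stand-in for waitforlisten's socket polling:

  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  "$SPDK_DIR/build/bin/spdk_tgt" &                    # RPC server on /var/tmp/spdk.sock
  spdkpid=$!
  sleep 2                                             # waitforlisten polls the socket instead
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # dump -> /tmp/spdk_mem_dump.txt
  "$SPDK_DIR/scripts/dpdk_mem_info.py"                # summary: heaps, mempools, memzones
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # per-element view of heap 0
  kill "$spdkpid"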
00:08:12.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:12.288 04:48:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.288 04:48:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:12.288 04:48:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.288 04:48:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:12.288 04:48:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:12.288 04:48:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:12.288 04:48:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:12.288 04:48:35 -- scripts/common.sh@335 -- # IFS=.-: 00:08:12.288 04:48:35 -- scripts/common.sh@335 -- # read -ra ver1 00:08:12.288 04:48:35 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.288 04:48:35 -- scripts/common.sh@336 -- # read -ra ver2 00:08:12.288 04:48:35 -- scripts/common.sh@337 -- # local 'op=<' 00:08:12.288 04:48:35 -- scripts/common.sh@339 -- # ver1_l=2 00:08:12.288 04:48:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:12.288 04:48:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:12.288 04:48:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:12.288 04:48:35 -- scripts/common.sh@344 -- # : 1 00:08:12.288 04:48:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:12.288 04:48:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.288 04:48:35 -- scripts/common.sh@364 -- # decimal 1 00:08:12.288 04:48:35 -- scripts/common.sh@352 -- # local d=1 00:08:12.288 04:48:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.288 04:48:35 -- scripts/common.sh@354 -- # echo 1 00:08:12.288 04:48:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:12.288 04:48:35 -- scripts/common.sh@365 -- # decimal 2 00:08:12.288 04:48:35 -- scripts/common.sh@352 -- # local d=2 00:08:12.288 04:48:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.288 04:48:35 -- scripts/common.sh@354 -- # echo 2 00:08:12.288 04:48:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:12.288 04:48:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:12.288 04:48:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:12.288 04:48:35 -- scripts/common.sh@367 -- # return 0 00:08:12.288 04:48:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.288 04:48:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:12.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.288 --rc genhtml_branch_coverage=1 00:08:12.288 --rc genhtml_function_coverage=1 00:08:12.288 --rc genhtml_legend=1 00:08:12.288 --rc geninfo_all_blocks=1 00:08:12.288 --rc geninfo_unexecuted_blocks=1 00:08:12.289 00:08:12.289 ' 00:08:12.289 04:48:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:12.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.289 --rc genhtml_branch_coverage=1 00:08:12.289 --rc genhtml_function_coverage=1 00:08:12.289 --rc genhtml_legend=1 00:08:12.289 --rc geninfo_all_blocks=1 00:08:12.289 --rc geninfo_unexecuted_blocks=1 00:08:12.289 00:08:12.289 ' 00:08:12.289 04:48:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:12.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.289 --rc genhtml_branch_coverage=1 00:08:12.289 --rc genhtml_function_coverage=1 00:08:12.289 --rc genhtml_legend=1 00:08:12.289 --rc geninfo_all_blocks=1 00:08:12.289 --rc geninfo_unexecuted_blocks=1 00:08:12.289 00:08:12.289 ' 00:08:12.289 04:48:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:12.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.289 --rc genhtml_branch_coverage=1 00:08:12.289 --rc genhtml_function_coverage=1 00:08:12.289 --rc genhtml_legend=1 00:08:12.289 --rc geninfo_all_blocks=1 00:08:12.289 --rc geninfo_unexecuted_blocks=1 00:08:12.289 00:08:12.289 ' 00:08:12.289 04:48:35 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:12.289 04:48:35 -- bdev/nbd_common.sh@6 -- # set -e 00:08:12.289 04:48:35 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:12.289 04:48:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:12.289 04:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.289 04:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.548 ************************************ 00:08:12.548 START TEST event_perf 00:08:12.548 ************************************ 00:08:12.548 04:48:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:12.548 Running I/O for 1 seconds...[2024-11-18 04:48:35.849590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.548 [2024-11-18 04:48:35.849839] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61636 ] 00:08:12.548 [2024-11-18 04:48:36.021286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.807 [2024-11-18 04:48:36.192519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.807 [2024-11-18 04:48:36.192650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.807 Running I/O for 1 seconds...[2024-11-18 04:48:36.193605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.807 [2024-11-18 04:48:36.193610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.185 00:08:14.185 lcore 0: 200441 00:08:14.185 lcore 1: 200440 00:08:14.185 lcore 2: 200440 00:08:14.185 lcore 3: 200441 00:08:14.185 done. 00:08:14.185 00:08:14.185 real 0m1.756s 00:08:14.185 user 0m4.543s 00:08:14.185 sys 0m0.116s 00:08:14.185 04:48:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.185 04:48:37 -- common/autotest_common.sh@10 -- # set +x 00:08:14.185 ************************************ 00:08:14.185 END TEST event_perf 00:08:14.185 ************************************ 00:08:14.185 04:48:37 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:14.185 04:48:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:14.185 04:48:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.185 04:48:37 -- common/autotest_common.sh@10 -- # set +x 00:08:14.185 ************************************ 00:08:14.185 START TEST event_reactor 00:08:14.185 ************************************ 00:08:14.185 04:48:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:14.185 [2024-11-18 04:48:37.650710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
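The event_perf counters above are the substance of that test: with -m 0xF the app starts one reactor per core 0-3, -t 1 runs the event loop for one second, and each lcore reports how many events it processed; the four counters landing within one event of each other (about 200,440 apiece) shows the framework spreading events evenly across reactors. A sketch of running it by hand and totalling the counters (the awk field positions assume the 'lcore N: COUNT' output shape seen above):

  perf=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
  "$perf" -m 0xF -t 1 | awk '
      /^lcore/ { sum += $3; n++ }
      END      { printf "%d events across %d lcores\n", sum, n }'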
00:08:14.185 [2024-11-18 04:48:37.650831] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:08:14.445 [2024-11-18 04:48:37.804876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.704 [2024-11-18 04:48:37.971904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.082 test_start 00:08:16.082 oneshot 00:08:16.082 tick 100 00:08:16.082 tick 100 00:08:16.082 tick 250 00:08:16.082 tick 100 00:08:16.082 tick 100 00:08:16.082 tick 100 00:08:16.082 tick 250 00:08:16.082 tick 500 00:08:16.082 tick 100 00:08:16.082 tick 100 00:08:16.082 tick 250 00:08:16.082 tick 100 00:08:16.082 tick 100 00:08:16.082 test_end 00:08:16.082 00:08:16.082 real 0m1.717s 00:08:16.082 user 0m1.521s 00:08:16.082 sys 0m0.095s 00:08:16.082 04:48:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.082 ************************************ 00:08:16.082 04:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.082 END TEST event_reactor 00:08:16.082 ************************************ 00:08:16.082 04:48:39 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:16.082 04:48:39 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:16.082 04:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.082 04:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.082 ************************************ 00:08:16.082 START TEST event_reactor_perf 00:08:16.082 ************************************ 00:08:16.082 04:48:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:16.082 [2024-11-18 04:48:39.430707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
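event_reactor, by contrast, is a pass/fail scheduling check rather than a benchmark: on a single reactor (core 0) it fires a one-shot event plus recurring timers, and the test_start/oneshot/tick 100|250|500/test_end lines above are those callbacks reporting in. A sketch of asserting the markers from a wrapper script (the marker list is inferred from the output above, not taken from the test source):

  out=$(/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1)
  for marker in test_start oneshot 'tick 100' 'tick 250' 'tick 500' test_end; do
      grep -q "$marker" <<< "$out" || { echo "missing marker: $marker" >&2; exit 1; }
  done
  echo 'reactor markers OK'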
00:08:16.082 [2024-11-18 04:48:39.430967] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61712 ] 00:08:16.342 [2024-11-18 04:48:39.608999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.342 [2024-11-18 04:48:39.771856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.719 test_start 00:08:17.719 test_end 00:08:17.719 Performance: 312498 events per second 00:08:17.719 00:08:17.719 real 0m1.753s 00:08:17.719 user 0m1.550s 00:08:17.719 sys 0m0.101s 00:08:17.719 04:48:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.719 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.719 ************************************ 00:08:17.719 END TEST event_reactor_perf 00:08:17.719 ************************************ 00:08:17.719 04:48:41 -- event/event.sh@49 -- # uname -s 00:08:17.719 04:48:41 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:17.719 04:48:41 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:17.719 04:48:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.719 04:48:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.719 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.719 ************************************ 00:08:17.719 START TEST event_scheduler 00:08:17.719 ************************************ 00:08:17.719 04:48:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:17.978 * Looking for test storage... 00:08:17.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:17.978 04:48:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.978 04:48:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.978 04:48:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.978 04:48:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.978 04:48:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.978 04:48:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.978 04:48:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.978 04:48:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.978 04:48:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.978 04:48:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.978 04:48:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.978 04:48:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.978 04:48:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.978 04:48:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.978 04:48:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.978 04:48:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.978 04:48:41 -- scripts/common.sh@344 -- # : 1 00:08:17.978 04:48:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.978 04:48:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.978 04:48:41 -- scripts/common.sh@364 -- # decimal 1 00:08:17.978 04:48:41 -- scripts/common.sh@352 -- # local d=1 00:08:17.978 04:48:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.978 04:48:41 -- scripts/common.sh@354 -- # echo 1 00:08:17.978 04:48:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.978 04:48:41 -- scripts/common.sh@365 -- # decimal 2 00:08:17.978 04:48:41 -- scripts/common.sh@352 -- # local d=2 00:08:17.978 04:48:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.979 04:48:41 -- scripts/common.sh@354 -- # echo 2 00:08:17.979 04:48:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.979 04:48:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.979 04:48:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.979 04:48:41 -- scripts/common.sh@367 -- # return 0 00:08:17.979 04:48:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.979 04:48:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.979 --rc genhtml_branch_coverage=1 00:08:17.979 --rc genhtml_function_coverage=1 00:08:17.979 --rc genhtml_legend=1 00:08:17.979 --rc geninfo_all_blocks=1 00:08:17.979 --rc geninfo_unexecuted_blocks=1 00:08:17.979 00:08:17.979 ' 00:08:17.979 04:48:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.979 --rc genhtml_branch_coverage=1 00:08:17.979 --rc genhtml_function_coverage=1 00:08:17.979 --rc genhtml_legend=1 00:08:17.979 --rc geninfo_all_blocks=1 00:08:17.979 --rc geninfo_unexecuted_blocks=1 00:08:17.979 00:08:17.979 ' 00:08:17.979 04:48:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.979 --rc genhtml_branch_coverage=1 00:08:17.979 --rc genhtml_function_coverage=1 00:08:17.979 --rc genhtml_legend=1 00:08:17.979 --rc geninfo_all_blocks=1 00:08:17.979 --rc geninfo_unexecuted_blocks=1 00:08:17.979 00:08:17.979 ' 00:08:17.979 04:48:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.979 --rc genhtml_branch_coverage=1 00:08:17.979 --rc genhtml_function_coverage=1 00:08:17.979 --rc genhtml_legend=1 00:08:17.979 --rc geninfo_all_blocks=1 00:08:17.979 --rc geninfo_unexecuted_blocks=1 00:08:17.979 00:08:17.979 ' 00:08:17.979 04:48:41 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:17.979 04:48:41 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61787 00:08:17.979 04:48:41 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:17.979 04:48:41 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.979 04:48:41 -- scheduler/scheduler.sh@37 -- # waitforlisten 61787 00:08:17.979 04:48:41 -- common/autotest_common.sh@829 -- # '[' -z 61787 ']' 00:08:17.979 04:48:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.979 04:48:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.979 04:48:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
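Note the flags on the scheduler app: --wait-for-rpc brings the target up with only its RPC server running, so waitforlisten (the retry loop traced above, max_retries=100 against /var/tmp/spdk.sock) can gate the test until the socket answers; only then does the test pick a scheduler and finish initialization over RPC, exactly the framework_set_scheduler/framework_start_init pair that follows below. A minimal sketch of that bring-up sequence (the rpc_get_methods probe is a stand-in for waitforlisten's internal check):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  for ((i = 0; i < 100; i++)); do             # poor man's waitforlisten
      rpc rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
  rpc framework_set_scheduler dynamic         # choose the dynamic scheduler
  rpc framework_start_init                    # then complete subsystem init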
00:08:17.979 04:48:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.979 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.979 [2024-11-18 04:48:41.454980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.979 [2024-11-18 04:48:41.455157] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:08:18.238 [2024-11-18 04:48:41.634176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.495 [2024-11-18 04:48:41.885857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.495 [2024-11-18 04:48:41.885986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.495 [2024-11-18 04:48:41.886115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.495 [2024-11-18 04:48:41.886629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.061 04:48:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.061 04:48:42 -- common/autotest_common.sh@862 -- # return 0 00:08:19.061 04:48:42 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:19.061 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.061 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.061 POWER: Env isn't set yet! 00:08:19.061 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:19.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:19.061 POWER: Cannot set governor of lcore 0 to userspace 00:08:19.061 POWER: Attempting to initialise PSTAT power management... 00:08:19.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:19.061 POWER: Cannot set governor of lcore 0 to performance 00:08:19.061 POWER: Attempting to initialise AMD PSTATE power management... 00:08:19.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:19.061 POWER: Cannot set governor of lcore 0 to userspace 00:08:19.061 POWER: Attempting to initialise CPPC power management... 00:08:19.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:19.061 POWER: Cannot set governor of lcore 0 to userspace 00:08:19.061 POWER: Attempting to initialise VM power management... 
00:08:19.061 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:19.061 POWER: Unable to set Power Management Environment for lcore 0 00:08:19.061 [2024-11-18 04:48:42.396126] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:19.061 [2024-11-18 04:48:42.396149] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:19.061 [2024-11-18 04:48:42.396163] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:19.061 [2024-11-18 04:48:42.396235] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:19.061 [2024-11-18 04:48:42.396257] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:19.061 [2024-11-18 04:48:42.396269] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:19.061 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.061 04:48:42 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:19.061 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.061 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 [2024-11-18 04:48:42.700265] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:19.321 04:48:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.321 04:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 ************************************ 00:08:19.321 START TEST scheduler_create_thread 00:08:19.321 ************************************ 00:08:19.321 04:48:42 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 2 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 3 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 4 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 5 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 6 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 7 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 8 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 9 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 10 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.321 04:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.321 04:48:42 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:19.321 04:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.321 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 04:48:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.698 04:48:43 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:20.698 04:48:43 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:20.698 04:48:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.698 04:48:43 -- common/autotest_common.sh@10 -- # set +x 00:08:21.634 04:48:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.634 00:08:21.634 real 0m2.141s 00:08:21.634 user 0m0.016s 00:08:21.634 sys 0m0.010s 00:08:21.634 04:48:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.634 04:48:44 -- common/autotest_common.sh@10 -- # set +x 00:08:21.634 
************************************ 00:08:21.634 END TEST scheduler_create_thread 00:08:21.634 ************************************ 00:08:21.634 04:48:44 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:21.634 04:48:44 -- scheduler/scheduler.sh@46 -- # killprocess 61787 00:08:21.634 04:48:44 -- common/autotest_common.sh@936 -- # '[' -z 61787 ']' 00:08:21.634 04:48:44 -- common/autotest_common.sh@940 -- # kill -0 61787 00:08:21.634 04:48:44 -- common/autotest_common.sh@941 -- # uname 00:08:21.634 04:48:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.634 04:48:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61787 00:08:21.634 04:48:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:08:21.634 04:48:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:08:21.634 killing process with pid 61787 00:08:21.634 04:48:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61787' 00:08:21.634 04:48:44 -- common/autotest_common.sh@955 -- # kill 61787 00:08:21.634 04:48:44 -- common/autotest_common.sh@960 -- # wait 61787 00:08:21.893 [2024-11-18 04:48:45.333913] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:23.301 00:08:23.301 real 0m5.390s 00:08:23.301 user 0m8.734s 00:08:23.301 sys 0m0.515s 00:08:23.301 04:48:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.301 ************************************ 00:08:23.301 04:48:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.301 END TEST event_scheduler 00:08:23.301 ************************************ 00:08:23.301 04:48:46 -- event/event.sh@51 -- # modprobe -n nbd 00:08:23.301 04:48:46 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:23.301 04:48:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.301 04:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.301 04:48:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.301 ************************************ 00:08:23.301 START TEST app_repeat 00:08:23.301 ************************************ 00:08:23.301 04:48:46 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:08:23.301 04:48:46 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.301 04:48:46 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.301 04:48:46 -- event/event.sh@13 -- # local nbd_list 00:08:23.301 04:48:46 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:23.301 04:48:46 -- event/event.sh@14 -- # local bdev_list 00:08:23.301 04:48:46 -- event/event.sh@15 -- # local repeat_times=4 00:08:23.301 04:48:46 -- event/event.sh@17 -- # modprobe nbd 00:08:23.301 Process app_repeat pid: 61893 00:08:23.301 spdk_app_start Round 0 00:08:23.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
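The scheduler_create_thread test that just completed drives SPDK's dynamic scheduler entirely through RPCs exposed by the test's scheduler_plugin. Below is a condensed sketch of the traced sequence; every command name and argument is copied from the trace above, rpc_cmd is the autotest wrapper around scripts/rpc.py, and the loops are an editorial compression of the four per-core calls, so treat this as a sketch rather than the scheduler.sh source verbatim:

  # one busy (-a 100) thread pinned to each of cores 0-3, then one idle (-a 0) each
  for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  done
  for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done
  # unpinned threads with fixed activity, plus a runtime set_active and a delete
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"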
00:08:23.301 04:48:46 -- event/event.sh@19 -- # repeat_pid=61893 00:08:23.301 04:48:46 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:23.301 04:48:46 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61893' 00:08:23.301 04:48:46 -- event/event.sh@23 -- # for i in {0..2} 00:08:23.301 04:48:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:23.301 04:48:46 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:23.301 04:48:46 -- event/event.sh@25 -- # waitforlisten 61893 /var/tmp/spdk-nbd.sock 00:08:23.301 04:48:46 -- common/autotest_common.sh@829 -- # '[' -z 61893 ']' 00:08:23.301 04:48:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.301 04:48:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.301 04:48:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.301 04:48:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.301 04:48:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.301 [2024-11-18 04:48:46.701424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:23.301 [2024-11-18 04:48:46.701636] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61893 ] 00:08:23.560 [2024-11-18 04:48:46.876429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.818 [2024-11-18 04:48:47.109280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.818 [2024-11-18 04:48:47.109281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.385 04:48:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.385 04:48:47 -- common/autotest_common.sh@862 -- # return 0 00:08:24.385 04:48:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.643 Malloc0 00:08:24.643 04:48:48 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.902 Malloc1 00:08:24.902 04:48:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@12 -- # local i 00:08:24.902 04:48:48 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.902 04:48:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:25.161 /dev/nbd0 00:08:25.161 04:48:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:25.161 04:48:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:25.161 04:48:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:25.161 04:48:48 -- common/autotest_common.sh@867 -- # local i 00:08:25.161 04:48:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.161 04:48:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.161 04:48:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:25.161 04:48:48 -- common/autotest_common.sh@871 -- # break 00:08:25.421 04:48:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.421 04:48:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.421 04:48:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.421 1+0 records in 00:08:25.421 1+0 records out 00:08:25.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313305 s, 13.1 MB/s 00:08:25.421 04:48:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.421 04:48:48 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.421 04:48:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.421 04:48:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.421 04:48:48 -- common/autotest_common.sh@887 -- # return 0 00:08:25.421 04:48:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.421 04:48:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.421 04:48:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.421 /dev/nbd1 00:08:25.421 04:48:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.421 04:48:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.421 04:48:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:25.421 04:48:48 -- common/autotest_common.sh@867 -- # local i 00:08:25.421 04:48:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.421 04:48:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.421 04:48:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:25.679 04:48:48 -- common/autotest_common.sh@871 -- # break 00:08:25.679 04:48:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.679 04:48:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.679 04:48:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.679 1+0 records in 00:08:25.679 1+0 records out 00:08:25.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315697 s, 13.0 MB/s 00:08:25.679 04:48:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.679 04:48:48 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.679 04:48:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.679 04:48:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.679 04:48:48 -- common/autotest_common.sh@887 -- # return 0 00:08:25.679 
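Each nbd_start_disk call above is followed by the waitfornbd helper, whose polling loop is visible in the trace: grep on /proc/partitions until the device appears, then a single direct-I/O read through dd to prove it answers. A minimal reconstruction, assuming the helper sleeps briefly between polls (the poll interval itself is not visible in this trace):

  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1   # assumed poll interval; not shown in the trace
    done
    # prove the device answers reads: one 4 KiB block, direct I/O
    dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
      bs=4096 count=1 iflag=direct
    # the trace also stat(1)s the copy and checks for a non-zero size
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
  }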
04:48:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.679 04:48:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.679 04:48:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.679 04:48:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.679 04:48:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.936 { 00:08:25.936 "nbd_device": "/dev/nbd0", 00:08:25.936 "bdev_name": "Malloc0" 00:08:25.936 }, 00:08:25.936 { 00:08:25.936 "nbd_device": "/dev/nbd1", 00:08:25.936 "bdev_name": "Malloc1" 00:08:25.936 } 00:08:25.936 ]' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.936 { 00:08:25.936 "nbd_device": "/dev/nbd0", 00:08:25.936 "bdev_name": "Malloc0" 00:08:25.936 }, 00:08:25.936 { 00:08:25.936 "nbd_device": "/dev/nbd1", 00:08:25.936 "bdev_name": "Malloc1" 00:08:25.936 } 00:08:25.936 ]' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.936 /dev/nbd1' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.936 /dev/nbd1' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.936 04:48:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.936 256+0 records in 00:08:25.936 256+0 records out 00:08:25.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0072334 s, 145 MB/s 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.937 256+0 records in 00:08:25.937 256+0 records out 00:08:25.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257379 s, 40.7 MB/s 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.937 256+0 records in 00:08:25.937 256+0 records out 00:08:25.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034777 s, 30.2 MB/s 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.937 04:48:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@41 -- # break 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.195 04:48:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@41 -- # break 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.455 04:48:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@65 -- # true 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.713 
04:48:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.713 04:48:50 -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.713 04:48:50 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.280 04:48:50 -- event/event.sh@35 -- # sleep 3 00:08:28.656 [2024-11-18 04:48:51.738009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.656 [2024-11-18 04:48:51.911480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.656 [2024-11-18 04:48:51.911483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.656 [2024-11-18 04:48:52.070914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:28.656 [2024-11-18 04:48:52.071000] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:30.559 04:48:53 -- event/event.sh@23 -- # for i in {0..2} 00:08:30.559 spdk_app_start Round 1 00:08:30.559 04:48:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:30.559 04:48:53 -- event/event.sh@25 -- # waitforlisten 61893 /var/tmp/spdk-nbd.sock 00:08:30.559 04:48:53 -- common/autotest_common.sh@829 -- # '[' -z 61893 ']' 00:08:30.559 04:48:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.559 04:48:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:30.559 04:48:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
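The block of dd/cmp commands traced before each teardown is the data-integrity pass that app_repeat runs in every round: 1 MiB of random data is pushed through each NBD device with direct I/O and then compared back byte-for-byte. A sketch, with paths and sizes taken from the trace:

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256         # 1 MiB of test data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"   # non-zero exit on the first differing byte
  done
  rm "$tmp"

cmp failing on any byte is what would fail the test if the malloc bdevs corrupted data in flight.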
00:08:30.559 04:48:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.559 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:30.559 04:48:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.560 04:48:53 -- common/autotest_common.sh@862 -- # return 0 00:08:30.560 04:48:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.560 Malloc0 00:08:30.560 04:48:54 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.126 Malloc1 00:08:31.126 04:48:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@12 -- # local i 00:08:31.126 04:48:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.127 04:48:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.127 04:48:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.127 /dev/nbd0 00:08:31.127 04:48:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.127 04:48:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.127 04:48:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:31.127 04:48:54 -- common/autotest_common.sh@867 -- # local i 00:08:31.127 04:48:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:31.127 04:48:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:31.127 04:48:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:31.127 04:48:54 -- common/autotest_common.sh@871 -- # break 00:08:31.127 04:48:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:31.127 04:48:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:31.127 04:48:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.385 1+0 records in 00:08:31.385 1+0 records out 00:08:31.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267471 s, 15.3 MB/s 00:08:31.385 04:48:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.385 04:48:54 -- common/autotest_common.sh@884 -- # size=4096 00:08:31.385 04:48:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.385 04:48:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:31.385 04:48:54 -- common/autotest_common.sh@887 -- # return 0 00:08:31.385 04:48:54 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:31.385 /dev/nbd1 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:31.385 04:48:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:31.385 04:48:54 -- common/autotest_common.sh@867 -- # local i 00:08:31.385 04:48:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:31.385 04:48:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:31.385 04:48:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:31.385 04:48:54 -- common/autotest_common.sh@871 -- # break 00:08:31.385 04:48:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:31.385 04:48:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:31.385 04:48:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.385 1+0 records in 00:08:31.385 1+0 records out 00:08:31.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235424 s, 17.4 MB/s 00:08:31.385 04:48:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.385 04:48:54 -- common/autotest_common.sh@884 -- # size=4096 00:08:31.385 04:48:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.385 04:48:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:31.385 04:48:54 -- common/autotest_common.sh@887 -- # return 0 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.385 04:48:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.644 { 00:08:31.644 "nbd_device": "/dev/nbd0", 00:08:31.644 "bdev_name": "Malloc0" 00:08:31.644 }, 00:08:31.644 { 00:08:31.644 "nbd_device": "/dev/nbd1", 00:08:31.644 "bdev_name": "Malloc1" 00:08:31.644 } 00:08:31.644 ]' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.644 { 00:08:31.644 "nbd_device": "/dev/nbd0", 00:08:31.644 "bdev_name": "Malloc0" 00:08:31.644 }, 00:08:31.644 { 00:08:31.644 "nbd_device": "/dev/nbd1", 00:08:31.644 "bdev_name": "Malloc1" 00:08:31.644 } 00:08:31.644 ]' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.644 /dev/nbd1' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.644 /dev/nbd1' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.644 256+0 records in 00:08:31.644 256+0 records out 00:08:31.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102473 s, 102 MB/s 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.644 04:48:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.903 256+0 records in 00:08:31.903 256+0 records out 00:08:31.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282559 s, 37.1 MB/s 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.903 256+0 records in 00:08:31.903 256+0 records out 00:08:31.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284986 s, 36.8 MB/s 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@51 -- # local i 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.903 04:48:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@41 -- # break 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.162 04:48:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@41 -- # break 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.420 04:48:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.679 04:48:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.679 04:48:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.679 04:48:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.679 04:48:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.679 04:48:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@65 -- # true 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.679 04:48:56 -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.679 04:48:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:32.938 04:48:56 -- event/event.sh@35 -- # sleep 3 00:08:34.315 [2024-11-18 04:48:57.469625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.315 [2024-11-18 04:48:57.638239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.315 [2024-11-18 04:48:57.638260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.315 [2024-11-18 04:48:57.810446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:34.315 [2024-11-18 04:48:57.810574] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.256 spdk_app_start Round 2 00:08:36.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
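The nbd_get_count checks that bracket each round (count=2 after start, count=0 after stop) parse the JSON returned by the nbd_get_disks RPC, as traced above. A sketch of that pipeline; the || true guard mirrors the bare true visible in the trace when grep -c finds nothing and exits non-zero:

  json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]   # after nbd_stop_disk both devices must be gone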
00:08:36.256 04:48:59 -- event/event.sh@23 -- # for i in {0..2} 00:08:36.256 04:48:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:36.256 04:48:59 -- event/event.sh@25 -- # waitforlisten 61893 /var/tmp/spdk-nbd.sock 00:08:36.256 04:48:59 -- common/autotest_common.sh@829 -- # '[' -z 61893 ']' 00:08:36.256 04:48:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.256 04:48:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.256 04:48:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:36.256 04:48:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.256 04:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.256 04:48:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.256 04:48:59 -- common/autotest_common.sh@862 -- # return 0 00:08:36.256 04:48:59 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:36.515 Malloc0 00:08:36.515 04:48:59 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:36.773 Malloc1 00:08:36.773 04:49:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:36.773 04:49:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.774 04:49:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:36.774 04:49:00 -- bdev/nbd_common.sh@12 -- # local i 00:08:36.774 04:49:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:36.774 04:49:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.774 04:49:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:37.032 /dev/nbd0 00:08:37.032 04:49:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.032 04:49:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.032 04:49:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:37.032 04:49:00 -- common/autotest_common.sh@867 -- # local i 00:08:37.032 04:49:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:37.032 04:49:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:37.032 04:49:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:37.032 04:49:00 -- common/autotest_common.sh@871 -- # break 00:08:37.032 04:49:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:37.032 04:49:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:37.032 04:49:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:37.032 1+0 records in 00:08:37.032 1+0 records out 00:08:37.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285691 s, 14.3 MB/s 00:08:37.032 04:49:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.032 04:49:00 -- common/autotest_common.sh@884 -- # size=4096 00:08:37.032 04:49:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.032 04:49:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:37.032 04:49:00 -- common/autotest_common.sh@887 -- # return 0 00:08:37.032 04:49:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.032 04:49:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.032 04:49:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:37.291 /dev/nbd1 00:08:37.291 04:49:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:37.291 04:49:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:37.291 04:49:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:37.291 04:49:00 -- common/autotest_common.sh@867 -- # local i 00:08:37.291 04:49:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:37.291 04:49:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:37.291 04:49:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:37.291 04:49:00 -- common/autotest_common.sh@871 -- # break 00:08:37.291 04:49:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:37.291 04:49:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:37.291 04:49:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.291 1+0 records in 00:08:37.291 1+0 records out 00:08:37.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354294 s, 11.6 MB/s 00:08:37.291 04:49:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.291 04:49:00 -- common/autotest_common.sh@884 -- # size=4096 00:08:37.291 04:49:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.550 04:49:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:37.550 04:49:00 -- common/autotest_common.sh@887 -- # return 0 00:08:37.550 04:49:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.550 04:49:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.550 04:49:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.550 04:49:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.550 04:49:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:37.550 { 00:08:37.550 "nbd_device": "/dev/nbd0", 00:08:37.550 "bdev_name": "Malloc0" 00:08:37.550 }, 00:08:37.550 { 00:08:37.550 "nbd_device": "/dev/nbd1", 00:08:37.550 "bdev_name": "Malloc1" 00:08:37.550 } 00:08:37.550 ]' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:37.550 { 00:08:37.550 "nbd_device": "/dev/nbd0", 00:08:37.550 "bdev_name": "Malloc0" 00:08:37.550 }, 00:08:37.550 { 00:08:37.550 "nbd_device": "/dev/nbd1", 00:08:37.550 "bdev_name": "Malloc1" 00:08:37.550 } 00:08:37.550 ]' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:08:37.550 /dev/nbd1' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:37.550 /dev/nbd1' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@65 -- # count=2 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@95 -- # count=2 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:37.550 256+0 records in 00:08:37.550 256+0 records out 00:08:37.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715381 s, 147 MB/s 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.550 04:49:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:37.809 256+0 records in 00:08:37.809 256+0 records out 00:08:37.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271299 s, 38.7 MB/s 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:37.809 256+0 records in 00:08:37.809 256+0 records out 00:08:37.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348067 s, 30.1 MB/s 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.809 04:49:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@51 -- # local i 00:08:37.810 
04:49:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.810 04:49:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@41 -- # break 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.068 04:49:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@41 -- # break 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.327 04:49:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@65 -- # true 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@65 -- # count=0 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@104 -- # count=0 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:38.586 04:49:02 -- bdev/nbd_common.sh@109 -- # return 0 00:08:38.586 04:49:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:39.154 04:49:02 -- event/event.sh@35 -- # sleep 3 00:08:40.091 [2024-11-18 04:49:03.510512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:40.350 [2024-11-18 04:49:03.682120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.350 [2024-11-18 04:49:03.682130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.350 [2024-11-18 04:49:03.846825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:40.350 [2024-11-18 04:49:03.846887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:42.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
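With three rounds now traced, the overall shape of the app_repeat driver is visible. The sketch below is a reconstruction from this log rather than the event.sh source: each round tears the app down with spdk_kill_instance SIGTERM and sleeps 3 seconds while the binary (started with -t 4) loops around for the next round; waitforlisten and killprocess are autotest helpers, and repeat_pid is the pid captured at startup (61893 in this run).

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # bdev_malloc_create x2, nbd_start_disk x2, write/verify, nbd_stop_disk x2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      spdk_kill_instance SIGTERM
    sleep 3                       # let the app come back up for the next round
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # final Round 3 comes up
  killprocess "$repeat_pid"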
00:08:42.253 04:49:05 -- event/event.sh@38 -- # waitforlisten 61893 /var/tmp/spdk-nbd.sock 00:08:42.253 04:49:05 -- common/autotest_common.sh@829 -- # '[' -z 61893 ']' 00:08:42.253 04:49:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.253 04:49:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.253 04:49:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:42.253 04:49:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.253 04:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:42.253 04:49:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.253 04:49:05 -- common/autotest_common.sh@862 -- # return 0 00:08:42.253 04:49:05 -- event/event.sh@39 -- # killprocess 61893 00:08:42.253 04:49:05 -- common/autotest_common.sh@936 -- # '[' -z 61893 ']' 00:08:42.253 04:49:05 -- common/autotest_common.sh@940 -- # kill -0 61893 00:08:42.253 04:49:05 -- common/autotest_common.sh@941 -- # uname 00:08:42.253 04:49:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.253 04:49:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61893 00:08:42.253 killing process with pid 61893 00:08:42.253 04:49:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.253 04:49:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.253 04:49:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61893' 00:08:42.253 04:49:05 -- common/autotest_common.sh@955 -- # kill 61893 00:08:42.253 04:49:05 -- common/autotest_common.sh@960 -- # wait 61893 00:08:43.189 spdk_app_start is called in Round 0. 00:08:43.189 Shutdown signal received, stop current app iteration 00:08:43.189 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:43.189 spdk_app_start is called in Round 1. 00:08:43.189 Shutdown signal received, stop current app iteration 00:08:43.189 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:43.189 spdk_app_start is called in Round 2. 00:08:43.189 Shutdown signal received, stop current app iteration 00:08:43.189 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:43.189 spdk_app_start is called in Round 3. 
00:08:43.189 Shutdown signal received, stop current app iteration 00:08:43.189 ************************************ 00:08:43.189 END TEST app_repeat 00:08:43.189 ************************************ 00:08:43.189 04:49:06 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:43.189 04:49:06 -- event/event.sh@42 -- # return 0 00:08:43.189 00:08:43.189 real 0m20.022s 00:08:43.189 user 0m43.126s 00:08:43.189 sys 0m2.736s 00:08:43.189 04:49:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.189 04:49:06 -- common/autotest_common.sh@10 -- # set +x 00:08:43.448 04:49:06 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:43.448 04:49:06 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:43.448 04:49:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.448 04:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.448 04:49:06 -- common/autotest_common.sh@10 -- # set +x 00:08:43.448 ************************************ 00:08:43.448 START TEST cpu_locks 00:08:43.448 ************************************ 00:08:43.448 04:49:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:43.448 * Looking for test storage... 00:08:43.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:43.448 04:49:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:43.448 04:49:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:43.448 04:49:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:43.448 04:49:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:43.448 04:49:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:43.448 04:49:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:43.448 04:49:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:43.448 04:49:06 -- scripts/common.sh@335 -- # IFS=.-: 00:08:43.448 04:49:06 -- scripts/common.sh@335 -- # read -ra ver1 00:08:43.448 04:49:06 -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.448 04:49:06 -- scripts/common.sh@336 -- # read -ra ver2 00:08:43.448 04:49:06 -- scripts/common.sh@337 -- # local 'op=<' 00:08:43.448 04:49:06 -- scripts/common.sh@339 -- # ver1_l=2 00:08:43.448 04:49:06 -- scripts/common.sh@340 -- # ver2_l=1 00:08:43.448 04:49:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:43.448 04:49:06 -- scripts/common.sh@343 -- # case "$op" in 00:08:43.448 04:49:06 -- scripts/common.sh@344 -- # : 1 00:08:43.448 04:49:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:43.448 04:49:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.448 04:49:06 -- scripts/common.sh@364 -- # decimal 1 00:08:43.448 04:49:06 -- scripts/common.sh@352 -- # local d=1 00:08:43.448 04:49:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.448 04:49:06 -- scripts/common.sh@354 -- # echo 1 00:08:43.448 04:49:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:43.448 04:49:06 -- scripts/common.sh@365 -- # decimal 2 00:08:43.448 04:49:06 -- scripts/common.sh@352 -- # local d=2 00:08:43.448 04:49:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.448 04:49:06 -- scripts/common.sh@354 -- # echo 2 00:08:43.448 04:49:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:43.448 04:49:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:43.448 04:49:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:43.448 04:49:06 -- scripts/common.sh@367 -- # return 0 00:08:43.448 04:49:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.448 04:49:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:43.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.449 --rc genhtml_branch_coverage=1 00:08:43.449 --rc genhtml_function_coverage=1 00:08:43.449 --rc genhtml_legend=1 00:08:43.449 --rc geninfo_all_blocks=1 00:08:43.449 --rc geninfo_unexecuted_blocks=1 00:08:43.449 00:08:43.449 ' 00:08:43.449 04:49:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:43.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.449 --rc genhtml_branch_coverage=1 00:08:43.449 --rc genhtml_function_coverage=1 00:08:43.449 --rc genhtml_legend=1 00:08:43.449 --rc geninfo_all_blocks=1 00:08:43.449 --rc geninfo_unexecuted_blocks=1 00:08:43.449 00:08:43.449 ' 00:08:43.449 04:49:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:43.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.449 --rc genhtml_branch_coverage=1 00:08:43.449 --rc genhtml_function_coverage=1 00:08:43.449 --rc genhtml_legend=1 00:08:43.449 --rc geninfo_all_blocks=1 00:08:43.449 --rc geninfo_unexecuted_blocks=1 00:08:43.449 00:08:43.449 ' 00:08:43.449 04:49:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:43.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.449 --rc genhtml_branch_coverage=1 00:08:43.449 --rc genhtml_function_coverage=1 00:08:43.449 --rc genhtml_legend=1 00:08:43.449 --rc geninfo_all_blocks=1 00:08:43.449 --rc geninfo_unexecuted_blocks=1 00:08:43.449 00:08:43.449 ' 00:08:43.449 04:49:06 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:43.449 04:49:06 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:43.449 04:49:06 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:43.449 04:49:06 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:43.449 04:49:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.449 04:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.449 04:49:06 -- common/autotest_common.sh@10 -- # set +x 00:08:43.449 ************************************ 00:08:43.449 START TEST default_locks 00:08:43.449 ************************************ 00:08:43.449 04:49:06 -- common/autotest_common.sh@1114 -- # default_locks 00:08:43.449 04:49:06 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62394 00:08:43.449 04:49:06 -- event/cpu_locks.sh@47 -- # waitforlisten 62394 00:08:43.449 04:49:06 -- common/autotest_common.sh@829 -- # '[' -z 62394 ']' 00:08:43.449 04:49:06 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:43.449 04:49:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.449 04:49:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.449 04:49:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.449 04:49:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.449 04:49:06 -- common/autotest_common.sh@10 -- # set +x 00:08:43.708 [2024-11-18 04:49:06.972363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.708 [2024-11-18 04:49:06.972571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62394 ] 00:08:43.708 [2024-11-18 04:49:07.143253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.966 [2024-11-18 04:49:07.305955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.966 [2024-11-18 04:49:07.306207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.344 04:49:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.344 04:49:08 -- common/autotest_common.sh@862 -- # return 0 00:08:45.344 04:49:08 -- event/cpu_locks.sh@49 -- # locks_exist 62394 00:08:45.344 04:49:08 -- event/cpu_locks.sh@22 -- # lslocks -p 62394 00:08:45.344 04:49:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:45.604 04:49:09 -- event/cpu_locks.sh@50 -- # killprocess 62394 00:08:45.604 04:49:09 -- common/autotest_common.sh@936 -- # '[' -z 62394 ']' 00:08:45.604 04:49:09 -- common/autotest_common.sh@940 -- # kill -0 62394 00:08:45.604 04:49:09 -- common/autotest_common.sh@941 -- # uname 00:08:45.604 04:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.604 04:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62394 00:08:45.604 04:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.604 04:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.604 04:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62394' 00:08:45.604 killing process with pid 62394 00:08:45.604 04:49:09 -- common/autotest_common.sh@955 -- # kill 62394 00:08:45.604 04:49:09 -- common/autotest_common.sh@960 -- # wait 62394 00:08:47.506 04:49:10 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62394 00:08:47.506 04:49:10 -- common/autotest_common.sh@650 -- # local es=0 00:08:47.506 04:49:10 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62394 00:08:47.506 04:49:10 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:47.506 04:49:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.506 04:49:10 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:47.506 04:49:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.506 04:49:10 -- common/autotest_common.sh@653 -- # waitforlisten 62394 00:08:47.506 04:49:10 -- common/autotest_common.sh@829 -- # '[' -z 62394 ']' 00:08:47.506 04:49:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.506 04:49:10 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.506 04:49:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.506 04:49:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.506 04:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.506 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62394) - No such process 00:08:47.506 ERROR: process (pid: 62394) is no longer running 00:08:47.506 04:49:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.506 04:49:10 -- common/autotest_common.sh@862 -- # return 1 00:08:47.506 04:49:10 -- common/autotest_common.sh@653 -- # es=1 00:08:47.506 04:49:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.506 04:49:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:47.506 04:49:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:47.506 04:49:10 -- event/cpu_locks.sh@54 -- # no_locks 00:08:47.506 04:49:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:47.506 04:49:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:47.506 04:49:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:47.506 00:08:47.506 real 0m4.040s 00:08:47.506 user 0m4.342s 00:08:47.506 sys 0m0.616s 00:08:47.506 04:49:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.506 04:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.506 ************************************ 00:08:47.506 END TEST default_locks 00:08:47.506 ************************************ 00:08:47.506 04:49:10 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:47.506 04:49:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.506 04:49:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.506 04:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.506 ************************************ 00:08:47.506 START TEST default_locks_via_rpc 00:08:47.506 ************************************ 00:08:47.506 04:49:10 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:08:47.506 04:49:10 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62471 00:08:47.506 04:49:10 -- event/cpu_locks.sh@63 -- # waitforlisten 62471 00:08:47.506 04:49:10 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:47.506 04:49:10 -- common/autotest_common.sh@829 -- # '[' -z 62471 ']' 00:08:47.506 04:49:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.506 04:49:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.506 04:49:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.506 04:49:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.506 04:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.765 [2024-11-18 04:49:11.056048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
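The default_locks run above follows a fixed shape: launch spdk_tgt on core mask 0x1, confirm the reactor claimed its per-core file lock, kill the target, then prove that a fresh waitforlisten on the dead pid fails. The lock probe is the locks_exist/lslocks pair visible in the trace; a minimal sketch of that helper, reconstructed from the trace (the exact body in event/cpu_locks.sh may differ):

    # Succeed if the target process holds an SPDK per-core file lock.
    # lslocks lists POSIX locks by pid; SPDK names them spdk_cpu_lock_NNN.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
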
00:08:47.765 [2024-11-18 04:49:11.056268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62471 ] 00:08:47.765 [2024-11-18 04:49:11.225682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.024 [2024-11-18 04:49:11.390723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.024 [2024-11-18 04:49:11.390949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.400 04:49:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.400 04:49:12 -- common/autotest_common.sh@862 -- # return 0 00:08:49.400 04:49:12 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:49.400 04:49:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.400 04:49:12 -- common/autotest_common.sh@10 -- # set +x 00:08:49.400 04:49:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.400 04:49:12 -- event/cpu_locks.sh@67 -- # no_locks 00:08:49.400 04:49:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:49.400 04:49:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:49.400 04:49:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:49.400 04:49:12 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:49.400 04:49:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.400 04:49:12 -- common/autotest_common.sh@10 -- # set +x 00:08:49.400 04:49:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.400 04:49:12 -- event/cpu_locks.sh@71 -- # locks_exist 62471 00:08:49.400 04:49:12 -- event/cpu_locks.sh@22 -- # lslocks -p 62471 00:08:49.400 04:49:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.659 04:49:13 -- event/cpu_locks.sh@73 -- # killprocess 62471 00:08:49.659 04:49:13 -- common/autotest_common.sh@936 -- # '[' -z 62471 ']' 00:08:49.659 04:49:13 -- common/autotest_common.sh@940 -- # kill -0 62471 00:08:49.659 04:49:13 -- common/autotest_common.sh@941 -- # uname 00:08:49.659 04:49:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:49.659 04:49:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62471 00:08:49.659 04:49:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:49.659 04:49:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:49.659 killing process with pid 62471 00:08:49.659 04:49:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62471' 00:08:49.659 04:49:13 -- common/autotest_common.sh@955 -- # kill 62471 00:08:49.659 04:49:13 -- common/autotest_common.sh@960 -- # wait 62471 00:08:51.562 00:08:51.562 real 0m4.057s 00:08:51.562 user 0m4.306s 00:08:51.562 sys 0m0.630s 00:08:51.562 04:49:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.562 04:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:51.562 ************************************ 00:08:51.562 END TEST default_locks_via_rpc 00:08:51.562 ************************************ 00:08:51.820 04:49:15 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:51.820 04:49:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.820 04:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.820 04:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:51.820 
************************************ 00:08:51.820 START TEST non_locking_app_on_locked_coremask 00:08:51.820 ************************************ 00:08:51.820 04:49:15 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:08:51.820 04:49:15 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62542 00:08:51.820 04:49:15 -- event/cpu_locks.sh@81 -- # waitforlisten 62542 /var/tmp/spdk.sock 00:08:51.820 04:49:15 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:51.820 04:49:15 -- common/autotest_common.sh@829 -- # '[' -z 62542 ']' 00:08:51.820 04:49:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.820 04:49:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.820 04:49:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.820 04:49:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.820 04:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:51.820 [2024-11-18 04:49:15.177108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.820 [2024-11-18 04:49:15.177367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:08:52.079 [2024-11-18 04:49:15.359231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.079 [2024-11-18 04:49:15.522868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.079 [2024-11-18 04:49:15.523122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.458 04:49:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.458 04:49:16 -- common/autotest_common.sh@862 -- # return 0 00:08:53.458 04:49:16 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62571 00:08:53.458 04:49:16 -- event/cpu_locks.sh@85 -- # waitforlisten 62571 /var/tmp/spdk2.sock 00:08:53.458 04:49:16 -- common/autotest_common.sh@829 -- # '[' -z 62571 ']' 00:08:53.458 04:49:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.458 04:49:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.458 04:49:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.458 04:49:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.458 04:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:53.458 04:49:16 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:53.458 [2024-11-18 04:49:16.900798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.458 [2024-11-18 04:49:16.900972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62571 ] 00:08:53.716 [2024-11-18 04:49:17.078134] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
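Before the two-target cases, default_locks_via_rpc (which ended just above) toggled lock claiming at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core locks on a live target, framework_enable_cpumask_locks re-claims them for the current mask, and the lslocks probe then finds spdk_cpu_lock again. A sketch against pid 62471, using the suite's rpc_cmd helper (assumed, per the trace, to be a thin wrapper over scripts/rpc.py):

    # drop and re-acquire per-core locks on a running target
    rpc_cmd framework_disable_cpumask_locks   # locks released; no_locks passes
    rpc_cmd framework_enable_cpumask_locks    # locks re-claimed for mask 0x1
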
00:08:53.716 [2024-11-18 04:49:17.078200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.974 [2024-11-18 04:49:17.443285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.974 [2024-11-18 04:49:17.443521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.887 04:49:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.887 04:49:19 -- common/autotest_common.sh@862 -- # return 0 00:08:55.887 04:49:19 -- event/cpu_locks.sh@87 -- # locks_exist 62542 00:08:55.887 04:49:19 -- event/cpu_locks.sh@22 -- # lslocks -p 62542 00:08:55.887 04:49:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:56.824 04:49:20 -- event/cpu_locks.sh@89 -- # killprocess 62542 00:08:56.824 04:49:20 -- common/autotest_common.sh@936 -- # '[' -z 62542 ']' 00:08:56.824 04:49:20 -- common/autotest_common.sh@940 -- # kill -0 62542 00:08:56.824 04:49:20 -- common/autotest_common.sh@941 -- # uname 00:08:56.824 04:49:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.824 04:49:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62542 00:08:56.824 04:49:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:56.824 04:49:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:56.824 killing process with pid 62542 00:08:56.824 04:49:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62542' 00:08:56.824 04:49:20 -- common/autotest_common.sh@955 -- # kill 62542 00:08:56.824 04:49:20 -- common/autotest_common.sh@960 -- # wait 62542 00:09:01.009 04:49:24 -- event/cpu_locks.sh@90 -- # killprocess 62571 00:09:01.009 04:49:24 -- common/autotest_common.sh@936 -- # '[' -z 62571 ']' 00:09:01.009 04:49:24 -- common/autotest_common.sh@940 -- # kill -0 62571 00:09:01.009 04:49:24 -- common/autotest_common.sh@941 -- # uname 00:09:01.009 04:49:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.009 04:49:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62571 00:09:01.009 04:49:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:01.009 04:49:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:01.009 killing process with pid 62571 00:09:01.009 04:49:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62571' 00:09:01.009 04:49:24 -- common/autotest_common.sh@955 -- # kill 62571 00:09:01.009 04:49:24 -- common/autotest_common.sh@960 -- # wait 62571 00:09:02.910 00:09:02.910 real 0m11.319s 00:09:02.910 user 0m12.262s 00:09:02.910 sys 0m1.310s 00:09:02.910 04:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.910 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:09:02.910 ************************************ 00:09:02.910 END TEST non_locking_app_on_locked_coremask 00:09:02.910 ************************************ 00:09:03.169 04:49:26 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:03.169 04:49:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.169 04:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.169 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.169 ************************************ 00:09:03.169 START TEST locking_app_on_unlocked_coremask 00:09:03.169 ************************************ 00:09:03.169 04:49:26 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:09:03.169 04:49:26 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=62716 00:09:03.169 04:49:26 -- event/cpu_locks.sh@99 -- # waitforlisten 62716 /var/tmp/spdk.sock 00:09:03.169 04:49:26 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:03.169 04:49:26 -- common/autotest_common.sh@829 -- # '[' -z 62716 ']' 00:09:03.169 04:49:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.169 04:49:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.169 04:49:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.169 04:49:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.169 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.169 [2024-11-18 04:49:26.539444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:03.169 [2024-11-18 04:49:26.539609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62716 ] 00:09:03.427 [2024-11-18 04:49:26.712024] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:03.427 [2024-11-18 04:49:26.712094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.427 [2024-11-18 04:49:26.888621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.427 [2024-11-18 04:49:26.888864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.804 04:49:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.804 04:49:28 -- common/autotest_common.sh@862 -- # return 0 00:09:04.804 04:49:28 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62738 00:09:04.804 04:49:28 -- event/cpu_locks.sh@103 -- # waitforlisten 62738 /var/tmp/spdk2.sock 00:09:04.804 04:49:28 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:04.804 04:49:28 -- common/autotest_common.sh@829 -- # '[' -z 62738 ']' 00:09:04.804 04:49:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:04.804 04:49:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:04.804 04:49:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:04.804 04:49:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.804 04:49:28 -- common/autotest_common.sh@10 -- # set +x 00:09:04.804 [2024-11-18 04:49:28.250360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
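Both non_locking_app_on_locked_coremask above and locking_app_on_unlocked_coremask starting here run two targets on the same core mask over separate RPC sockets; only which side passes --disable-cpumask-locks changes. In the first case the second target opts out of claiming (hence the "CPU core locks deactivated." notice); here the first target (pid 62716) opts out, leaving the lock free for the second. A sketch of the dual launch, with spdk_tgt standing for build/bin/spdk_tgt and the backgrounding simplified relative to the suite's waitforlisten handshake:

    # two targets sharing core 0; only one ever claims the lock
    spdk_tgt -m 0x1 --disable-cpumask-locks &    # first target: no lock taken
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second target: claims spdk_cpu_lock_000
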
00:09:04.804 [2024-11-18 04:49:28.250490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62738 ] 00:09:05.062 [2024-11-18 04:49:28.417846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.320 [2024-11-18 04:49:28.801932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:05.320 [2024-11-18 04:49:28.802168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.220 04:49:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.220 04:49:30 -- common/autotest_common.sh@862 -- # return 0 00:09:07.220 04:49:30 -- event/cpu_locks.sh@105 -- # locks_exist 62738 00:09:07.220 04:49:30 -- event/cpu_locks.sh@22 -- # lslocks -p 62738 00:09:07.220 04:49:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:08.156 04:49:31 -- event/cpu_locks.sh@107 -- # killprocess 62716 00:09:08.156 04:49:31 -- common/autotest_common.sh@936 -- # '[' -z 62716 ']' 00:09:08.156 04:49:31 -- common/autotest_common.sh@940 -- # kill -0 62716 00:09:08.156 04:49:31 -- common/autotest_common.sh@941 -- # uname 00:09:08.156 04:49:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:08.156 04:49:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62716 00:09:08.156 04:49:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:08.156 04:49:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:08.156 04:49:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62716' 00:09:08.156 killing process with pid 62716 00:09:08.156 04:49:31 -- common/autotest_common.sh@955 -- # kill 62716 00:09:08.156 04:49:31 -- common/autotest_common.sh@960 -- # wait 62716 00:09:12.388 04:49:35 -- event/cpu_locks.sh@108 -- # killprocess 62738 00:09:12.388 04:49:35 -- common/autotest_common.sh@936 -- # '[' -z 62738 ']' 00:09:12.388 04:49:35 -- common/autotest_common.sh@940 -- # kill -0 62738 00:09:12.388 04:49:35 -- common/autotest_common.sh@941 -- # uname 00:09:12.388 04:49:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:12.388 04:49:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62738 00:09:12.388 04:49:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:12.388 killing process with pid 62738 00:09:12.388 04:49:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:12.388 04:49:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62738' 00:09:12.388 04:49:35 -- common/autotest_common.sh@955 -- # kill 62738 00:09:12.388 04:49:35 -- common/autotest_common.sh@960 -- # wait 62738 00:09:14.291 00:09:14.291 real 0m11.055s 00:09:14.291 user 0m11.975s 00:09:14.291 sys 0m1.367s 00:09:14.291 04:49:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.291 04:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.291 ************************************ 00:09:14.291 END TEST locking_app_on_unlocked_coremask 00:09:14.291 ************************************ 00:09:14.291 04:49:37 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:14.291 04:49:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.291 04:49:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.291 04:49:37 -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.291 ************************************ 00:09:14.291 START TEST locking_app_on_locked_coremask 00:09:14.291 ************************************ 00:09:14.291 04:49:37 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:09:14.291 04:49:37 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62873 00:09:14.292 04:49:37 -- event/cpu_locks.sh@116 -- # waitforlisten 62873 /var/tmp/spdk.sock 00:09:14.292 04:49:37 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:14.292 04:49:37 -- common/autotest_common.sh@829 -- # '[' -z 62873 ']' 00:09:14.292 04:49:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.292 04:49:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.292 04:49:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.292 04:49:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.292 04:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.292 [2024-11-18 04:49:37.654181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.292 [2024-11-18 04:49:37.654364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62873 ] 00:09:14.551 [2024-11-18 04:49:37.825117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.551 [2024-11-18 04:49:37.998122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.551 [2024-11-18 04:49:37.998399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.928 04:49:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.929 04:49:39 -- common/autotest_common.sh@862 -- # return 0 00:09:15.929 04:49:39 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62902 00:09:15.929 04:49:39 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62902 /var/tmp/spdk2.sock 00:09:15.929 04:49:39 -- common/autotest_common.sh@650 -- # local es=0 00:09:15.929 04:49:39 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62902 /var/tmp/spdk2.sock 00:09:15.929 04:49:39 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:15.929 04:49:39 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:15.929 04:49:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.929 04:49:39 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:15.929 04:49:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.929 04:49:39 -- common/autotest_common.sh@653 -- # waitforlisten 62902 /var/tmp/spdk2.sock 00:09:15.929 04:49:39 -- common/autotest_common.sh@829 -- # '[' -z 62902 ']' 00:09:15.929 04:49:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.929 04:49:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.929 04:49:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
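locking_app_on_locked_coremask expects the second launch on an already-claimed core to fail outright, and the valid_exec_arg/es bookkeeping in the trace above is the suite's negated-execution wrapper at work. A sketch of that NOT pattern, assuming the helper in autotest_common.sh matches the shape the trace implies:

    # succeed only if the wrapped command fails; used for expected-failure launches
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT waitforlisten 62902 /var/tmp/spdk2.sock   # pid 62902 must never come up
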
00:09:15.929 04:49:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.929 04:49:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.929 [2024-11-18 04:49:39.404310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.929 [2024-11-18 04:49:39.404477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62902 ] 00:09:16.192 [2024-11-18 04:49:39.579855] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62873 has claimed it. 00:09:16.192 [2024-11-18 04:49:39.579941] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:16.760 ERROR: process (pid: 62902) is no longer running 00:09:16.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62902) - No such process 00:09:16.760 04:49:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.760 04:49:40 -- common/autotest_common.sh@862 -- # return 1 00:09:16.760 04:49:40 -- common/autotest_common.sh@653 -- # es=1 00:09:16.760 04:49:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.760 04:49:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:16.760 04:49:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.760 04:49:40 -- event/cpu_locks.sh@122 -- # locks_exist 62873 00:09:16.760 04:49:40 -- event/cpu_locks.sh@22 -- # lslocks -p 62873 00:09:16.760 04:49:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:17.328 04:49:40 -- event/cpu_locks.sh@124 -- # killprocess 62873 00:09:17.328 04:49:40 -- common/autotest_common.sh@936 -- # '[' -z 62873 ']' 00:09:17.328 04:49:40 -- common/autotest_common.sh@940 -- # kill -0 62873 00:09:17.328 04:49:40 -- common/autotest_common.sh@941 -- # uname 00:09:17.328 04:49:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:17.328 04:49:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62873 00:09:17.328 04:49:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:17.328 04:49:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:17.328 killing process with pid 62873 00:09:17.328 04:49:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62873' 00:09:17.328 04:49:40 -- common/autotest_common.sh@955 -- # kill 62873 00:09:17.329 04:49:40 -- common/autotest_common.sh@960 -- # wait 62873 00:09:19.235 00:09:19.235 real 0m4.864s 00:09:19.235 user 0m5.409s 00:09:19.235 sys 0m0.828s 00:09:19.235 04:49:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.235 04:49:42 -- common/autotest_common.sh@10 -- # set +x 00:09:19.235 ************************************ 00:09:19.235 END TEST locking_app_on_locked_coremask 00:09:19.235 ************************************ 00:09:19.235 04:49:42 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:19.235 04:49:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.235 04:49:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.235 04:49:42 -- common/autotest_common.sh@10 -- # set +x 00:09:19.235 ************************************ 00:09:19.235 START TEST locking_overlapped_coremask 00:09:19.235 ************************************ 00:09:19.235 04:49:42 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:09:19.235 04:49:42 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62966 00:09:19.235 04:49:42 -- event/cpu_locks.sh@133 -- # waitforlisten 62966 /var/tmp/spdk.sock 00:09:19.235 04:49:42 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:19.235 04:49:42 -- common/autotest_common.sh@829 -- # '[' -z 62966 ']' 00:09:19.235 04:49:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.235 04:49:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.235 04:49:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.235 04:49:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.235 04:49:42 -- common/autotest_common.sh@10 -- # set +x 00:09:19.235 [2024-11-18 04:49:42.567011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:19.235 [2024-11-18 04:49:42.567262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62966 ] 00:09:19.235 [2024-11-18 04:49:42.739420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.494 [2024-11-18 04:49:42.905254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:19.494 [2024-11-18 04:49:42.905663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.494 [2024-11-18 04:49:42.906473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.494 [2024-11-18 04:49:42.906493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.872 04:49:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.872 04:49:44 -- common/autotest_common.sh@862 -- # return 0 00:09:20.872 04:49:44 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:20.872 04:49:44 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62992 00:09:20.872 04:49:44 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62992 /var/tmp/spdk2.sock 00:09:20.872 04:49:44 -- common/autotest_common.sh@650 -- # local es=0 00:09:20.872 04:49:44 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62992 /var/tmp/spdk2.sock 00:09:20.872 04:49:44 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:20.872 04:49:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.872 04:49:44 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:20.872 04:49:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.872 04:49:44 -- common/autotest_common.sh@653 -- # waitforlisten 62992 /var/tmp/spdk2.sock 00:09:20.872 04:49:44 -- common/autotest_common.sh@829 -- # '[' -z 62992 ']' 00:09:20.872 04:49:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.872 04:49:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.872 04:49:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:20.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
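locking_overlapped_coremask moves from a single core to overlapping masks: the first target takes -m 0x7 (cores 0, 1 and 2) and the second -m 0x1c (cores 2, 3 and 4), so they contend only on core 2, exactly the core named in the claim error further down. The overlap is plain bit arithmetic:

    # 0x7 = cores 0-2, 0x1c = cores 2-4; the AND exposes the contested core
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2
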
00:09:20.872 04:49:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.872 04:49:44 -- common/autotest_common.sh@10 -- # set +x 00:09:20.872 [2024-11-18 04:49:44.306811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.872 [2024-11-18 04:49:44.306948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62992 ] 00:09:21.130 [2024-11-18 04:49:44.479538] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62966 has claimed it. 00:09:21.130 [2024-11-18 04:49:44.479611] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:21.698 ERROR: process (pid: 62992) is no longer running 00:09:21.698 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62992) - No such process 00:09:21.698 04:49:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.698 04:49:45 -- common/autotest_common.sh@862 -- # return 1 00:09:21.698 04:49:45 -- common/autotest_common.sh@653 -- # es=1 00:09:21.698 04:49:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.698 04:49:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.698 04:49:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.698 04:49:45 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:21.698 04:49:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:21.698 04:49:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:21.698 04:49:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:21.698 04:49:45 -- event/cpu_locks.sh@141 -- # killprocess 62966 00:09:21.698 04:49:45 -- common/autotest_common.sh@936 -- # '[' -z 62966 ']' 00:09:21.698 04:49:45 -- common/autotest_common.sh@940 -- # kill -0 62966 00:09:21.698 04:49:45 -- common/autotest_common.sh@941 -- # uname 00:09:21.698 04:49:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:21.698 04:49:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62966 00:09:21.698 04:49:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:21.698 04:49:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:21.698 killing process with pid 62966 00:09:21.698 04:49:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62966' 00:09:21.698 04:49:45 -- common/autotest_common.sh@955 -- # kill 62966 00:09:21.698 04:49:45 -- common/autotest_common.sh@960 -- # wait 62966 00:09:23.601 00:09:23.601 real 0m4.536s 00:09:23.601 user 0m12.397s 00:09:23.601 sys 0m0.567s 00:09:23.601 04:49:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.601 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:09:23.601 ************************************ 00:09:23.601 END TEST locking_overlapped_coremask 00:09:23.601 ************************************ 00:09:23.601 04:49:47 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:23.601 04:49:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.601 04:49:47 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.601 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:09:23.601 ************************************ 00:09:23.601 START TEST locking_overlapped_coremask_via_rpc 00:09:23.601 ************************************ 00:09:23.601 04:49:47 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:09:23.601 04:49:47 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63056 00:09:23.601 04:49:47 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:23.601 04:49:47 -- event/cpu_locks.sh@149 -- # waitforlisten 63056 /var/tmp/spdk.sock 00:09:23.601 04:49:47 -- common/autotest_common.sh@829 -- # '[' -z 63056 ']' 00:09:23.601 04:49:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.601 04:49:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.601 04:49:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.601 04:49:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.601 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:09:23.860 [2024-11-18 04:49:47.160478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.860 [2024-11-18 04:49:47.160642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63056 ] 00:09:23.860 [2024-11-18 04:49:47.333647] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:23.860 [2024-11-18 04:49:47.333714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.118 [2024-11-18 04:49:47.508455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:24.118 [2024-11-18 04:49:47.508927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.118 [2024-11-18 04:49:47.509178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.118 [2024-11-18 04:49:47.509181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.495 04:49:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.495 04:49:48 -- common/autotest_common.sh@862 -- # return 0 00:09:25.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:25.495 04:49:48 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63081 00:09:25.495 04:49:48 -- event/cpu_locks.sh@153 -- # waitforlisten 63081 /var/tmp/spdk2.sock 00:09:25.495 04:49:48 -- common/autotest_common.sh@829 -- # '[' -z 63081 ']' 00:09:25.495 04:49:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:25.495 04:49:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.495 04:49:48 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:25.495 04:49:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
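locking_overlapped_coremask_via_rpc reuses the overlapping masks but boots both targets with --disable-cpumask-locks, so the overlap on core 2 is tolerated at startup and the conflict is provoked over RPC afterwards. The launch pair, masks and sockets as in the trace (spdk_tgt again standing for build/bin/spdk_tgt, backgrounding simplified):

    # overlapping masks boot cleanly because neither target claims locks yet
    spdk_tgt -m 0x7  --disable-cpumask-locks &                          # pid 63056, cores 0-2
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 63081, cores 2-4
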
00:09:25.495 04:49:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.495 04:49:48 -- common/autotest_common.sh@10 -- # set +x 00:09:25.495 [2024-11-18 04:49:48.923261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:25.495 [2024-11-18 04:49:48.923442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:09:25.754 [2024-11-18 04:49:49.102943] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:25.754 [2024-11-18 04:49:49.103012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.012 [2024-11-18 04:49:49.477027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.012 [2024-11-18 04:49:49.477496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.012 [2024-11-18 04:49:49.477761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:26.012 [2024-11-18 04:49:49.477833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.966 04:49:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.966 04:49:51 -- common/autotest_common.sh@862 -- # return 0 00:09:27.966 04:49:51 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:27.966 04:49:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.966 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:09:27.966 04:49:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.966 04:49:51 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:27.966 04:49:51 -- common/autotest_common.sh@650 -- # local es=0 00:09:27.966 04:49:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:27.966 04:49:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:27.966 04:49:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.966 04:49:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:27.966 04:49:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.966 04:49:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:27.966 04:49:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.966 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:09:27.966 [2024-11-18 04:49:51.344458] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63056 has claimed it. 
00:09:27.966 request: 00:09:27.966 { 00:09:27.966 "method": "framework_enable_cpumask_locks", 00:09:27.966 "req_id": 1 00:09:27.966 } 00:09:27.966 Got JSON-RPC error response 00:09:27.966 response: 00:09:27.966 { 00:09:27.966 "code": -32603, 00:09:27.966 "message": "Failed to claim CPU core: 2" 00:09:27.966 } 00:09:27.966 04:49:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:27.966 04:49:51 -- common/autotest_common.sh@653 -- # es=1 00:09:27.966 04:49:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.966 04:49:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.966 04:49:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.966 04:49:51 -- event/cpu_locks.sh@158 -- # waitforlisten 63056 /var/tmp/spdk.sock 00:09:27.966 04:49:51 -- common/autotest_common.sh@829 -- # '[' -z 63056 ']' 00:09:27.966 04:49:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.966 04:49:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.966 04:49:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.966 04:49:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.966 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:09:28.225 04:49:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.225 04:49:51 -- common/autotest_common.sh@862 -- # return 0 00:09:28.225 04:49:51 -- event/cpu_locks.sh@159 -- # waitforlisten 63081 /var/tmp/spdk2.sock 00:09:28.225 04:49:51 -- common/autotest_common.sh@829 -- # '[' -z 63081 ']' 00:09:28.225 04:49:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.225 04:49:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.225 04:49:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
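The failed claim surfaces as a structured JSON-RPC error rather than a crash: framework_enable_cpumask_locks on the second target returns code -32603 with "Failed to claim CPU core: 2", because pid 63056 enabled its locks first. The negative call, as the trace issues it:

    # enabling locks on the overlapping target must fail with -32603
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
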
00:09:28.225 04:49:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.225 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:09:28.485 04:49:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.485 04:49:51 -- common/autotest_common.sh@862 -- # return 0 00:09:28.485 04:49:51 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:28.485 04:49:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:28.485 04:49:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:28.485 04:49:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:28.485 00:09:28.485 real 0m4.745s 00:09:28.485 user 0m1.929s 00:09:28.485 sys 0m0.260s 00:09:28.485 04:49:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.485 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:09:28.485 ************************************ 00:09:28.485 END TEST locking_overlapped_coremask_via_rpc 00:09:28.485 ************************************ 00:09:28.485 04:49:51 -- event/cpu_locks.sh@174 -- # cleanup 00:09:28.485 04:49:51 -- event/cpu_locks.sh@15 -- # [[ -z 63056 ]] 00:09:28.485 04:49:51 -- event/cpu_locks.sh@15 -- # killprocess 63056 00:09:28.485 04:49:51 -- common/autotest_common.sh@936 -- # '[' -z 63056 ']' 00:09:28.485 04:49:51 -- common/autotest_common.sh@940 -- # kill -0 63056 00:09:28.485 04:49:51 -- common/autotest_common.sh@941 -- # uname 00:09:28.485 04:49:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:28.485 04:49:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63056 00:09:28.485 04:49:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:28.485 04:49:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:28.485 killing process with pid 63056 00:09:28.485 04:49:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63056' 00:09:28.485 04:49:51 -- common/autotest_common.sh@955 -- # kill 63056 00:09:28.485 04:49:51 -- common/autotest_common.sh@960 -- # wait 63056 00:09:31.022 04:49:54 -- event/cpu_locks.sh@16 -- # [[ -z 63081 ]] 00:09:31.022 04:49:54 -- event/cpu_locks.sh@16 -- # killprocess 63081 00:09:31.022 04:49:54 -- common/autotest_common.sh@936 -- # '[' -z 63081 ']' 00:09:31.022 04:49:54 -- common/autotest_common.sh@940 -- # kill -0 63081 00:09:31.022 04:49:54 -- common/autotest_common.sh@941 -- # uname 00:09:31.022 04:49:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:31.022 04:49:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63081 00:09:31.022 04:49:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:31.022 04:49:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:31.022 killing process with pid 63081 00:09:31.022 04:49:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63081' 00:09:31.022 04:49:54 -- common/autotest_common.sh@955 -- # kill 63081 00:09:31.022 04:49:54 -- common/autotest_common.sh@960 -- # wait 63081 00:09:32.927 04:49:56 -- event/cpu_locks.sh@18 -- # rm -f 00:09:32.927 04:49:56 -- event/cpu_locks.sh@1 -- # cleanup 00:09:32.927 04:49:56 -- event/cpu_locks.sh@15 -- # [[ -z 63056 ]] 00:09:32.927 04:49:56 -- event/cpu_locks.sh@15 -- # killprocess 63056 00:09:32.927 04:49:56 -- 
common/autotest_common.sh@936 -- # '[' -z 63056 ']' 00:09:32.927 04:49:56 -- common/autotest_common.sh@940 -- # kill -0 63056 00:09:32.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63056) - No such process 00:09:32.927 Process with pid 63056 is not found 00:09:32.927 04:49:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63056 is not found' 00:09:32.927 04:49:56 -- event/cpu_locks.sh@16 -- # [[ -z 63081 ]] 00:09:32.927 04:49:56 -- event/cpu_locks.sh@16 -- # killprocess 63081 00:09:32.927 04:49:56 -- common/autotest_common.sh@936 -- # '[' -z 63081 ']' 00:09:32.927 04:49:56 -- common/autotest_common.sh@940 -- # kill -0 63081 00:09:32.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63081) - No such process 00:09:32.927 Process with pid 63081 is not found 00:09:32.927 04:49:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63081 is not found' 00:09:32.927 04:49:56 -- event/cpu_locks.sh@18 -- # rm -f 00:09:32.927 00:09:32.927 real 0m49.295s 00:09:32.927 user 1m25.861s 00:09:32.927 sys 0m6.699s 00:09:32.927 04:49:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.927 04:49:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.927 ************************************ 00:09:32.927 END TEST cpu_locks 00:09:32.927 ************************************ 00:09:32.927 00:09:32.927 real 1m20.421s 00:09:32.927 user 2m25.544s 00:09:32.927 sys 0m10.528s 00:09:32.927 04:49:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.927 04:49:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.927 ************************************ 00:09:32.927 END TEST event 00:09:32.927 ************************************ 00:09:32.927 04:49:56 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:32.927 04:49:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.927 04:49:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.927 04:49:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.927 ************************************ 00:09:32.927 START TEST thread 00:09:32.927 ************************************ 00:09:32.927 04:49:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:32.927 * Looking for test storage... 
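The cleanup above is deliberately idempotent: killprocess tolerates "No such process" for pids 63056 and 63081, and rm -f clears any leftover lock files. Just before that, check_remaining_locks compared the lock files on disk against the expected per-core set; its trace expands to a glob-versus-brace-expansion comparison, roughly:

    # assert exactly the per-core locks for cores 0-2 exist, nothing else
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }
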
00:09:32.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:32.927 04:49:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:32.927 04:49:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:32.927 04:49:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:32.927 04:49:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:32.927 04:49:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:32.927 04:49:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:32.927 04:49:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:32.927 04:49:56 -- scripts/common.sh@335 -- # IFS=.-: 00:09:32.927 04:49:56 -- scripts/common.sh@335 -- # read -ra ver1 00:09:32.927 04:49:56 -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.927 04:49:56 -- scripts/common.sh@336 -- # read -ra ver2 00:09:32.927 04:49:56 -- scripts/common.sh@337 -- # local 'op=<' 00:09:32.927 04:49:56 -- scripts/common.sh@339 -- # ver1_l=2 00:09:32.927 04:49:56 -- scripts/common.sh@340 -- # ver2_l=1 00:09:32.927 04:49:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:32.927 04:49:56 -- scripts/common.sh@343 -- # case "$op" in 00:09:32.927 04:49:56 -- scripts/common.sh@344 -- # : 1 00:09:32.927 04:49:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:32.927 04:49:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.927 04:49:56 -- scripts/common.sh@364 -- # decimal 1 00:09:32.927 04:49:56 -- scripts/common.sh@352 -- # local d=1 00:09:32.927 04:49:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.927 04:49:56 -- scripts/common.sh@354 -- # echo 1 00:09:32.927 04:49:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:32.927 04:49:56 -- scripts/common.sh@365 -- # decimal 2 00:09:32.927 04:49:56 -- scripts/common.sh@352 -- # local d=2 00:09:32.927 04:49:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.927 04:49:56 -- scripts/common.sh@354 -- # echo 2 00:09:32.927 04:49:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:32.927 04:49:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:32.927 04:49:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:32.927 04:49:56 -- scripts/common.sh@367 -- # return 0 00:09:32.927 04:49:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.927 04:49:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.927 --rc genhtml_branch_coverage=1 00:09:32.927 --rc genhtml_function_coverage=1 00:09:32.927 --rc genhtml_legend=1 00:09:32.927 --rc geninfo_all_blocks=1 00:09:32.927 --rc geninfo_unexecuted_blocks=1 00:09:32.927 00:09:32.927 ' 00:09:32.927 04:49:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.927 --rc genhtml_branch_coverage=1 00:09:32.927 --rc genhtml_function_coverage=1 00:09:32.927 --rc genhtml_legend=1 00:09:32.927 --rc geninfo_all_blocks=1 00:09:32.927 --rc geninfo_unexecuted_blocks=1 00:09:32.927 00:09:32.927 ' 00:09:32.927 04:49:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.927 --rc genhtml_branch_coverage=1 00:09:32.927 --rc genhtml_function_coverage=1 00:09:32.927 --rc genhtml_legend=1 00:09:32.927 --rc geninfo_all_blocks=1 00:09:32.927 --rc geninfo_unexecuted_blocks=1 00:09:32.927 00:09:32.927 ' 00:09:32.927 04:49:56 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:32.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.927 --rc genhtml_branch_coverage=1 00:09:32.927 --rc genhtml_function_coverage=1 00:09:32.927 --rc genhtml_legend=1 00:09:32.927 --rc geninfo_all_blocks=1 00:09:32.927 --rc geninfo_unexecuted_blocks=1 00:09:32.927 00:09:32.927 ' 00:09:32.927 04:49:56 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:32.927 04:49:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:32.927 04:49:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.927 04:49:56 -- common/autotest_common.sh@10 -- # set +x 00:09:32.927 ************************************ 00:09:32.927 START TEST thread_poller_perf 00:09:32.927 ************************************ 00:09:32.927 04:49:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:32.927 [2024-11-18 04:49:56.329878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:32.927 [2024-11-18 04:49:56.330069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63267 ] 00:09:33.186 [2024-11-18 04:49:56.504284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.444 [2024-11-18 04:49:56.734404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.444 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:34.822 [2024-11-18T04:49:58.346Z] ====================================== 00:09:34.822 [2024-11-18T04:49:58.346Z] busy:2214989916 (cyc) 00:09:34.822 [2024-11-18T04:49:58.346Z] total_run_count: 321000 00:09:34.822 [2024-11-18T04:49:58.346Z] tsc_hz: 2200000000 (cyc) 00:09:34.822 [2024-11-18T04:49:58.346Z] ====================================== 00:09:34.822 [2024-11-18T04:49:58.346Z] poller_cost: 6900 (cyc), 3136 (nsec) 00:09:34.822 00:09:34.822 real 0m1.815s 00:09:34.822 user 0m1.599s 00:09:34.822 sys 0m0.114s 00:09:34.822 04:49:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.822 ************************************ 00:09:34.822 END TEST thread_poller_perf 00:09:34.822 ************************************ 00:09:34.822 04:49:58 -- common/autotest_common.sh@10 -- # set +x 00:09:34.822 04:49:58 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:34.822 04:49:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:34.822 04:49:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.822 04:49:58 -- common/autotest_common.sh@10 -- # set +x 00:09:34.822 ************************************ 00:09:34.822 START TEST thread_poller_perf 00:09:34.822 ************************************ 00:09:34.822 04:49:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:34.822 [2024-11-18 04:49:58.195101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
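The poller_perf figures above reduce to simple arithmetic: poller_cost is busy TSC cycles divided by pollers executed, and the nanosecond figure divides that by the TSC rate. Reproducing the reported numbers with 64-bit shell arithmetic:

    # derive the reported poller_cost from the raw counters
    busy=2214989916 runs=321000 tsc_hz=2200000000
    echo $(( busy / runs ))                        # 6900 cycles per poller
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 3136 ns at 2.2 GHz
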
00:09:34.822 [2024-11-18 04:49:58.195262] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:09:35.081 [2024-11-18 04:49:58.364513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.081 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:35.081 [2024-11-18 04:49:58.526919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.458 [2024-11-18T04:49:59.982Z] ====================================== 00:09:36.458 [2024-11-18T04:49:59.982Z] busy:2205140178 (cyc) 00:09:36.458 [2024-11-18T04:49:59.982Z] total_run_count: 3912000 00:09:36.458 [2024-11-18T04:49:59.982Z] tsc_hz: 2200000000 (cyc) 00:09:36.458 [2024-11-18T04:49:59.982Z] ====================================== 00:09:36.458 [2024-11-18T04:49:59.982Z] poller_cost: 563 (cyc), 255 (nsec) 00:09:36.458 00:09:36.458 real 0m1.735s 00:09:36.458 user 0m1.536s 00:09:36.458 sys 0m0.098s 00:09:36.458 04:49:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.458 ************************************ 00:09:36.458 04:49:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.458 END TEST thread_poller_perf 00:09:36.458 ************************************ 00:09:36.458 04:49:59 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:36.458 04:49:59 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:36.458 04:49:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.458 04:49:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.458 04:49:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.458 ************************************ 00:09:36.458 START TEST thread_spdk_lock 00:09:36.458 ************************************ 00:09:36.458 04:49:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:36.716 [2024-11-18 04:49:59.987462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
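The two poller_perf runs differ only in -l, the poller period in microseconds: with -l 1 the 1000 pollers are timed pollers (321000 executions at ~6900 cycles each), while -l 0 registers plain active pollers that run every reactor iteration (3912000 executions at ~563 cycles each), which suggests most of the timed-poller cost is timer bookkeeping rather than the poller body. The two invocations, paths relative to the spdk tree:

    # timed pollers (1 us period) vs. busy-loop pollers, one core, one second
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # ~6900 cyc per poller
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # ~563 cyc per poller
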
00:09:36.716 [2024-11-18 04:49:59.987606] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63345 ] 00:09:36.716 [2024-11-18 04:50:00.163437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.974 [2024-11-18 04:50:00.379071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.974 [2024-11-18 04:50:00.379086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.543 [2024-11-18 04:50:00.920817] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:37.543 [2024-11-18 04:50:00.920916] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:37.543 [2024-11-18 04:50:00.920955] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x5b88e18655c0 00:09:37.543 [2024-11-18 04:50:00.929407] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:37.543 [2024-11-18 04:50:00.929505] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:37.543 [2024-11-18 04:50:00.929540] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:37.801 Starting test contend 00:09:37.801 Worker Delay Wait us Hold us Total us 00:09:37.801 0 3 115630 200836 316466 00:09:37.801 1 5 58202 303476 361678 00:09:37.801 PASS test contend 00:09:37.801 Starting test hold_by_poller 00:09:37.801 PASS test hold_by_poller 00:09:37.801 Starting test hold_by_message 00:09:37.801 PASS test hold_by_message 00:09:37.801 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:37.801 100014 assertions passed 00:09:37.801 0 assertions failed 00:09:37.801 00:09:37.801 real 0m1.356s 00:09:37.801 user 0m1.701s 00:09:37.801 sys 0m0.107s 00:09:37.801 04:50:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.801 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:37.801 ************************************ 00:09:37.801 END TEST thread_spdk_lock 00:09:37.801 ************************************ 00:09:38.060 00:09:38.060 real 0m5.223s 00:09:38.060 user 0m4.974s 00:09:38.060 sys 0m0.494s 00:09:38.060 04:50:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.060 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.060 ************************************ 00:09:38.060 END TEST thread 00:09:38.060 ************************************ 00:09:38.060 04:50:01 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:38.060 04:50:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.060 04:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.060 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.061 ************************************ 00:09:38.061 START TEST accel 00:09:38.061 
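The *ERROR* spinlock lines above are the point of the test rather than failures: spdk_lock deliberately provokes SPDK's spinlock misuse detection (deadlock, lock held while a thread goes off CPU), and the run still ends with 100014 assertions passed and 0 failed. In the contend table, Total us is Wait us plus Hold us per worker, which the printed rows satisfy; a quick check with the copied values:

    # Total us = Wait us + Hold us for each worker in the contend table
    awk 'BEGIN { print 115630 + 200836; print 58202 + 303476 }'
    # -> 316466 and 361678, matching the Total us column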
************************************ 00:09:38.061 04:50:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:38.061 * Looking for test storage... 00:09:38.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:38.061 04:50:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:38.061 04:50:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:38.061 04:50:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:38.061 04:50:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:38.061 04:50:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:38.061 04:50:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:38.061 04:50:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:38.061 04:50:01 -- scripts/common.sh@335 -- # IFS=.-: 00:09:38.061 04:50:01 -- scripts/common.sh@335 -- # read -ra ver1 00:09:38.061 04:50:01 -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.061 04:50:01 -- scripts/common.sh@336 -- # read -ra ver2 00:09:38.061 04:50:01 -- scripts/common.sh@337 -- # local 'op=<' 00:09:38.061 04:50:01 -- scripts/common.sh@339 -- # ver1_l=2 00:09:38.061 04:50:01 -- scripts/common.sh@340 -- # ver2_l=1 00:09:38.061 04:50:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:38.061 04:50:01 -- scripts/common.sh@343 -- # case "$op" in 00:09:38.061 04:50:01 -- scripts/common.sh@344 -- # : 1 00:09:38.061 04:50:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:38.061 04:50:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.061 04:50:01 -- scripts/common.sh@364 -- # decimal 1 00:09:38.061 04:50:01 -- scripts/common.sh@352 -- # local d=1 00:09:38.061 04:50:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.061 04:50:01 -- scripts/common.sh@354 -- # echo 1 00:09:38.061 04:50:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:38.061 04:50:01 -- scripts/common.sh@365 -- # decimal 2 00:09:38.061 04:50:01 -- scripts/common.sh@352 -- # local d=2 00:09:38.061 04:50:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.061 04:50:01 -- scripts/common.sh@354 -- # echo 2 00:09:38.061 04:50:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:38.061 04:50:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:38.061 04:50:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:38.061 04:50:01 -- scripts/common.sh@367 -- # return 0 00:09:38.061 04:50:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.061 04:50:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 04:50:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 04:50:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc 
genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 04:50:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 04:50:01 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:38.061 04:50:01 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:38.061 04:50:01 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:38.061 04:50:01 -- accel/accel.sh@59 -- # spdk_tgt_pid=63433 00:09:38.061 04:50:01 -- accel/accel.sh@60 -- # waitforlisten 63433 00:09:38.061 04:50:01 -- common/autotest_common.sh@829 -- # '[' -z 63433 ']' 00:09:38.061 04:50:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.061 04:50:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.061 04:50:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.061 04:50:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.061 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.061 04:50:01 -- accel/accel.sh@58 -- # build_accel_config 00:09:38.061 04:50:01 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:38.061 04:50:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:38.061 04:50:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:38.061 04:50:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:38.061 04:50:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:38.061 04:50:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:38.061 04:50:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:38.061 04:50:01 -- accel/accel.sh@42 -- # jq -r . 00:09:38.319 [2024-11-18 04:50:01.622058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:38.319 [2024-11-18 04:50:01.622246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:09:38.319 [2024-11-18 04:50:01.795547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.576 [2024-11-18 04:50:02.029374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.576 [2024-11-18 04:50:02.029670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.951 04:50:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.951 04:50:03 -- common/autotest_common.sh@862 -- # return 0 00:09:39.951 04:50:03 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:39.951 04:50:03 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:39.951 04:50:03 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:39.951 04:50:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.951 04:50:03 -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 04:50:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 
04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # IFS== 00:09:39.951 04:50:03 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.951 04:50:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.951 04:50:03 -- accel/accel.sh@67 -- # killprocess 63433 00:09:39.951 04:50:03 -- common/autotest_common.sh@936 -- # '[' -z 63433 ']' 00:09:39.951 04:50:03 -- common/autotest_common.sh@940 -- # kill -0 63433 00:09:39.951 04:50:03 -- common/autotest_common.sh@941 -- # uname 00:09:39.951 04:50:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:39.951 04:50:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63433 00:09:39.951 killing process with pid 63433 00:09:39.951 04:50:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:39.952 04:50:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:39.952 04:50:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63433' 00:09:39.952 04:50:03 -- common/autotest_common.sh@955 -- # kill 63433 00:09:39.952 04:50:03 -- common/autotest_common.sh@960 -- # wait 63433 00:09:42.478 04:50:05 -- accel/accel.sh@68 -- # trap - ERR 00:09:42.478 04:50:05 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:42.478 04:50:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:42.478 04:50:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.478 04:50:05 -- common/autotest_common.sh@10 -- # set +x 00:09:42.478 04:50:05 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:09:42.478 04:50:05 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.478 04:50:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:42.478 04:50:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.478 04:50:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.478 04:50:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.478 04:50:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.478 04:50:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.478 04:50:05 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.478 04:50:05 -- accel/accel.sh@42 -- # jq -r . 
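The long val=/read trace above is accel.sh consuming the accel_get_opc_assignments RPC: jq flattens the JSON object of opcode-to-module assignments into key=value lines, and a read loop with IFS== records the expected module per opcode (all software here, since no hardware accel module is configured). A standalone sketch of that transformation, with an inline JSON literal standing in for the RPC output:

    # Populate expected_opcs[opc]=module the way accel.sh does above.
    # The JSON literal is illustrative; accel.sh pipes rpc.py accel_get_opc_assignments instead.
    declare -A expected_opcs
    json='{"copy":"software","fill":"software","crc32c":"software"}'
    while IFS='=' read -r opc module; do
        expected_opcs["$opc"]=$module
    done < <(jq -r 'to_entries | map("\(.key)=\(.value)") | .[]' <<< "$json")
    for opc in "${!expected_opcs[@]}"; do echo "$opc=${expected_opcs[$opc]}"; done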
00:09:42.478 04:50:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:42.478 04:50:05 -- common/autotest_common.sh@10 -- # set +x 00:09:42.478 04:50:05 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:42.478 04:50:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:42.478 04:50:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.478 04:50:05 -- common/autotest_common.sh@10 -- # set +x 00:09:42.478 ************************************ 00:09:42.478 START TEST accel_missing_filename 00:09:42.478 ************************************ 00:09:42.478 04:50:05 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:09:42.478 04:50:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:42.478 04:50:05 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:42.478 04:50:05 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:42.478 04:50:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.478 04:50:05 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:42.478 04:50:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.478 04:50:05 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:09:42.478 04:50:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:42.478 04:50:05 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.478 04:50:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.478 04:50:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.478 04:50:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.478 04:50:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.478 04:50:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.478 04:50:05 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.478 04:50:05 -- accel/accel.sh@42 -- # jq -r . 00:09:42.478 [2024-11-18 04:50:05.593770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:42.478 [2024-11-18 04:50:05.593967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63516 ] 00:09:42.478 [2024-11-18 04:50:05.765394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.479 [2024-11-18 04:50:05.931120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.737 [2024-11-18 04:50:06.096943] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.325 [2024-11-18 04:50:06.543747] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:43.591 A filename is required. 
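accel_missing_filename passes precisely because accel_perf fails: -w compress without -l gives the app no input file, so it aborts with "A filename is required." and the harness's NOT wrapper inverts the exit status. A reduced sketch of that expect-failure idiom (the real wrapper in autotest_common.sh also tracks the error code, which the es= lines below check explicitly):

    # Minimal expect-failure wrapper in the spirit of autotest_common.sh's NOT.
    NOT() { if "$@"; then return 1; else return 0; fi; }
    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf   # path as traced above
    NOT "$accel_perf" -t 1 -w compress   # succeeds only because compress without -l must fail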
00:09:43.591 04:50:06 -- common/autotest_common.sh@653 -- # es=234 00:09:43.591 04:50:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.591 04:50:06 -- common/autotest_common.sh@662 -- # es=106 00:09:43.591 04:50:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:43.591 04:50:06 -- common/autotest_common.sh@670 -- # es=1 00:09:43.591 04:50:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.591 00:09:43.591 real 0m1.382s 00:09:43.591 user 0m1.135s 00:09:43.591 sys 0m0.156s 00:09:43.591 04:50:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.591 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:09:43.591 ************************************ 00:09:43.591 END TEST accel_missing_filename 00:09:43.591 ************************************ 00:09:43.591 04:50:06 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.591 04:50:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:43.591 04:50:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.591 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:09:43.591 ************************************ 00:09:43.591 START TEST accel_compress_verify 00:09:43.591 ************************************ 00:09:43.591 04:50:06 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.591 04:50:06 -- common/autotest_common.sh@650 -- # local es=0 00:09:43.591 04:50:06 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.591 04:50:06 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:43.591 04:50:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.591 04:50:06 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:43.591 04:50:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.591 04:50:06 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.591 04:50:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.591 04:50:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:43.591 04:50:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:43.591 04:50:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:43.591 04:50:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:43.591 04:50:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:43.591 04:50:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:43.591 04:50:06 -- accel/accel.sh@41 -- # local IFS=, 00:09:43.591 04:50:06 -- accel/accel.sh@42 -- # jq -r . 00:09:43.591 [2024-11-18 04:50:07.017005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:43.591 [2024-11-18 04:50:07.017141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63547 ] 00:09:43.849 [2024-11-18 04:50:07.172505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.849 [2024-11-18 04:50:07.353055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.108 [2024-11-18 04:50:07.524676] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.674 [2024-11-18 04:50:07.965115] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:44.932 00:09:44.932 Compression does not support the verify option, aborting. 00:09:44.932 04:50:08 -- common/autotest_common.sh@653 -- # es=161 00:09:44.932 04:50:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.932 04:50:08 -- common/autotest_common.sh@662 -- # es=33 00:09:44.932 04:50:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:44.932 04:50:08 -- common/autotest_common.sh@670 -- # es=1 00:09:44.932 04:50:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.932 00:09:44.932 real 0m1.368s 00:09:44.932 user 0m1.147s 00:09:44.932 sys 0m0.129s 00:09:44.932 04:50:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.932 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:09:44.932 ************************************ 00:09:44.932 END TEST accel_compress_verify 00:09:44.932 ************************************ 00:09:44.932 04:50:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:44.932 04:50:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:44.932 04:50:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.932 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:09:44.932 ************************************ 00:09:44.932 START TEST accel_wrong_workload 00:09:44.932 ************************************ 00:09:44.932 04:50:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:09:44.932 04:50:08 -- common/autotest_common.sh@650 -- # local es=0 00:09:44.932 04:50:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:44.932 04:50:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:44.932 04:50:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.932 04:50:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:44.932 04:50:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.932 04:50:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:09:44.932 04:50:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:44.932 04:50:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.932 04:50:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.932 04:50:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.932 04:50:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.932 04:50:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.932 04:50:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.932 04:50:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.932 04:50:08 -- accel/accel.sh@42 -- # jq -r . 
00:09:44.932 Unsupported workload type: foobar 00:09:44.932 [2024-11-18 04:50:08.430980] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:44.932 accel_perf options: 00:09:44.932 [-h help message] 00:09:44.932 [-q queue depth per core] 00:09:44.932 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:44.932 [-T number of threads per core 00:09:44.932 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:44.932 [-t time in seconds] 00:09:44.932 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:44.932 [ dif_verify, , dif_generate, dif_generate_copy 00:09:44.932 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:44.932 [-l for compress/decompress workloads, name of uncompressed input file 00:09:44.932 [-S for crc32c workload, use this seed value (default 0) 00:09:44.932 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:44.932 [-f for fill workload, use this BYTE value (default 255) 00:09:44.932 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:44.932 [-y verify result if this switch is on] 00:09:44.932 [-a tasks to allocate per core (default: same value as -q)] 00:09:44.932 Can be used to spread operations across a wider range of memory. 00:09:45.191 04:50:08 -- common/autotest_common.sh@653 -- # es=1 00:09:45.191 04:50:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.191 04:50:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:45.191 04:50:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.191 00:09:45.191 real 0m0.061s 00:09:45.191 user 0m0.040s 00:09:45.191 sys 0m0.028s 00:09:45.191 04:50:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.191 ************************************ 00:09:45.191 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.191 END TEST accel_wrong_workload 00:09:45.191 ************************************ 00:09:45.191 04:50:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:45.191 04:50:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:45.191 04:50:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.191 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.191 ************************************ 00:09:45.191 START TEST accel_negative_buffers 00:09:45.191 ************************************ 00:09:45.191 04:50:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:45.191 04:50:08 -- common/autotest_common.sh@650 -- # local es=0 00:09:45.191 04:50:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:45.191 04:50:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:45.191 04:50:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.191 04:50:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:45.191 04:50:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.191 04:50:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:09:45.191 04:50:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:45.191 04:50:08 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:45.191 04:50:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:45.191 04:50:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:45.191 04:50:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.191 04:50:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:45.191 04:50:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:45.191 04:50:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:45.191 04:50:08 -- accel/accel.sh@42 -- # jq -r . 00:09:45.191 -x option must be non-negative. 00:09:45.191 [2024-11-18 04:50:08.540497] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:45.191 accel_perf options: 00:09:45.191 [-h help message] 00:09:45.191 [-q queue depth per core] 00:09:45.191 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:45.191 [-T number of threads per core 00:09:45.191 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:45.191 [-t time in seconds] 00:09:45.191 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:45.191 [ dif_verify, , dif_generate, dif_generate_copy 00:09:45.191 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:45.191 [-l for compress/decompress workloads, name of uncompressed input file 00:09:45.191 [-S for crc32c workload, use this seed value (default 0) 00:09:45.191 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:45.191 [-f for fill workload, use this BYTE value (default 255) 00:09:45.191 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:45.191 [-y verify result if this switch is on] 00:09:45.191 [-a tasks to allocate per core (default: same value as -q)] 00:09:45.191 Can be used to spread operations across a wider range of memory. 
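Both negative cases end the same way: spdk_app_parse_args rejects the argument ("foobar", then -x -1), accel_perf exits non-zero, and the NOT wrapper turns that into a pass; the usage text appears twice only because each failing run dumps it. For contrast, a well-formed invocation assembled from the options documented above (illustrative values, binary path as in this log):

    # Valid per the usage text above: xor with 2 source buffers, verify enabled.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w xor -x 2 -q 32 -o 4096 -y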
00:09:45.191 04:50:08 -- common/autotest_common.sh@653 -- # es=1 00:09:45.191 04:50:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.191 04:50:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:45.191 04:50:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.191 00:09:45.191 real 0m0.063s 00:09:45.191 user 0m0.034s 00:09:45.191 sys 0m0.037s 00:09:45.191 04:50:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.191 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.191 ************************************ 00:09:45.191 END TEST accel_negative_buffers 00:09:45.191 ************************************ 00:09:45.191 04:50:08 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:45.191 04:50:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:45.191 04:50:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.191 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.191 ************************************ 00:09:45.191 START TEST accel_crc32c 00:09:45.191 ************************************ 00:09:45.191 04:50:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:45.191 04:50:08 -- accel/accel.sh@16 -- # local accel_opc 00:09:45.191 04:50:08 -- accel/accel.sh@17 -- # local accel_module 00:09:45.191 04:50:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:45.191 04:50:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:45.191 04:50:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:45.191 04:50:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:45.191 04:50:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:45.191 04:50:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.191 04:50:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:45.191 04:50:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:45.191 04:50:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:45.191 04:50:08 -- accel/accel.sh@42 -- # jq -r . 00:09:45.191 [2024-11-18 04:50:08.648485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:45.191 [2024-11-18 04:50:08.648633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63625 ] 00:09:45.450 [2024-11-18 04:50:08.802647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.709 [2024-11-18 04:50:08.979472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.607 04:50:10 -- accel/accel.sh@18 -- # out=' 00:09:47.607 SPDK Configuration: 00:09:47.607 Core mask: 0x1 00:09:47.607 00:09:47.607 Accel Perf Configuration: 00:09:47.607 Workload Type: crc32c 00:09:47.607 CRC-32C seed: 32 00:09:47.607 Transfer size: 4096 bytes 00:09:47.607 Vector count 1 00:09:47.607 Module: software 00:09:47.607 Queue depth: 32 00:09:47.607 Allocate depth: 32 00:09:47.607 # threads/core: 1 00:09:47.607 Run time: 1 seconds 00:09:47.607 Verify: Yes 00:09:47.607 00:09:47.607 Running for 1 seconds... 
00:09:47.607 00:09:47.607 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:47.607 ------------------------------------------------------------------------------------ 00:09:47.607 0,0 413696/s 1616 MiB/s 0 0 00:09:47.607 ==================================================================================== 00:09:47.607 Total 413696/s 1616 MiB/s 0 0' 00:09:47.607 04:50:10 -- accel/accel.sh@20 -- # IFS=: 00:09:47.607 04:50:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:47.607 04:50:10 -- accel/accel.sh@20 -- # read -r var val 00:09:47.607 04:50:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:47.607 04:50:10 -- accel/accel.sh@12 -- # build_accel_config 00:09:47.607 04:50:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:47.607 04:50:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:47.607 04:50:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:47.607 04:50:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:47.607 04:50:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:47.607 04:50:10 -- accel/accel.sh@41 -- # local IFS=, 00:09:47.607 04:50:10 -- accel/accel.sh@42 -- # jq -r . 00:09:47.607 [2024-11-18 04:50:11.009769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:47.607 [2024-11-18 04:50:11.009923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63651 ] 00:09:47.864 [2024-11-18 04:50:11.180925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.864 [2024-11-18 04:50:11.352331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val=0x1 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val=crc32c 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val=32 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:48.122 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.122 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.122 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val=software 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@23 -- # accel_module=software 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val=32 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val=32 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val=1 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val=Yes 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:48.123 04:50:11 -- accel/accel.sh@21 -- # val= 00:09:48.123 04:50:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # IFS=: 00:09:48.123 04:50:11 -- accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@21 -- # val= 00:09:50.023 04:50:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # IFS=: 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@21 -- # val= 00:09:50.023 04:50:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # IFS=: 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@21 -- # val= 00:09:50.023 04:50:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # IFS=: 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@21 -- # val= 00:09:50.023 04:50:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # IFS=: 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@21 -- # val= 00:09:50.023 04:50:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # IFS=: 00:09:50.023 04:50:13 -- 
accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@21 -- # val= 00:09:50.023 04:50:13 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # IFS=: 00:09:50.023 04:50:13 -- accel/accel.sh@20 -- # read -r var val 00:09:50.023 04:50:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:50.023 04:50:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:50.023 04:50:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:50.023 00:09:50.023 real 0m4.759s 00:09:50.023 user 0m4.259s 00:09:50.023 sys 0m0.313s 00:09:50.023 04:50:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:50.023 ************************************ 00:09:50.023 END TEST accel_crc32c 00:09:50.023 04:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:50.023 ************************************ 00:09:50.023 04:50:13 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:50.023 04:50:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:50.023 04:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.023 04:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:50.023 ************************************ 00:09:50.023 START TEST accel_crc32c_C2 00:09:50.023 ************************************ 00:09:50.023 04:50:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:50.023 04:50:13 -- accel/accel.sh@16 -- # local accel_opc 00:09:50.023 04:50:13 -- accel/accel.sh@17 -- # local accel_module 00:09:50.023 04:50:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:50.023 04:50:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:50.023 04:50:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:50.023 04:50:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:50.023 04:50:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.023 04:50:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.023 04:50:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:50.023 04:50:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:50.023 04:50:13 -- accel/accel.sh@41 -- # local IFS=, 00:09:50.023 04:50:13 -- accel/accel.sh@42 -- # jq -r . 00:09:50.023 [2024-11-18 04:50:13.456111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:50.024 [2024-11-18 04:50:13.456283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63699 ] 00:09:50.282 [2024-11-18 04:50:13.626370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.540 [2024-11-18 04:50:13.814640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.440 04:50:15 -- accel/accel.sh@18 -- # out=' 00:09:52.440 SPDK Configuration: 00:09:52.440 Core mask: 0x1 00:09:52.440 00:09:52.440 Accel Perf Configuration: 00:09:52.440 Workload Type: crc32c 00:09:52.440 CRC-32C seed: 0 00:09:52.440 Transfer size: 4096 bytes 00:09:52.440 Vector count 2 00:09:52.440 Module: software 00:09:52.440 Queue depth: 32 00:09:52.440 Allocate depth: 32 00:09:52.440 # threads/core: 1 00:09:52.440 Run time: 1 seconds 00:09:52.440 Verify: Yes 00:09:52.440 00:09:52.440 Running for 1 seconds... 
00:09:52.440 00:09:52.440 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:52.440 ------------------------------------------------------------------------------------ 00:09:52.440 0,0 308608/s 2411 MiB/s 0 0 00:09:52.440 ==================================================================================== 00:09:52.440 Total 308608/s 1205 MiB/s 0 0' 00:09:52.440 04:50:15 -- accel/accel.sh@20 -- # IFS=: 00:09:52.440 04:50:15 -- accel/accel.sh@20 -- # read -r var val 00:09:52.440 04:50:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:52.440 04:50:15 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.440 04:50:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:52.440 04:50:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.440 04:50:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.440 04:50:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.440 04:50:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.440 04:50:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.440 04:50:15 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.440 04:50:15 -- accel/accel.sh@42 -- # jq -r . 00:09:52.440 [2024-11-18 04:50:15.909069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:52.440 [2024-11-18 04:50:15.909279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63725 ] 00:09:52.699 [2024-11-18 04:50:16.092236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.958 [2024-11-18 04:50:16.282307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=0x1 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=crc32c 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=0 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=software 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@23 -- # accel_module=software 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=32 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=32 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=1 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val=Yes 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:52.958 04:50:16 -- accel/accel.sh@21 -- # val= 00:09:52.958 04:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # IFS=: 00:09:52.958 04:50:16 -- accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@21 -- # val= 00:09:54.859 04:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # IFS=: 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@21 -- # val= 00:09:54.859 04:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # IFS=: 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@21 -- # val= 00:09:54.859 04:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # IFS=: 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@21 -- # val= 00:09:54.859 04:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # IFS=: 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@21 -- # val= 00:09:54.859 04:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # IFS=: 00:09:54.859 04:50:18 -- 
accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@21 -- # val= 00:09:54.859 04:50:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # IFS=: 00:09:54.859 04:50:18 -- accel/accel.sh@20 -- # read -r var val 00:09:54.859 04:50:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:54.859 04:50:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:54.859 04:50:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.859 00:09:54.859 real 0m4.921s 00:09:54.859 user 0m4.387s 00:09:54.859 sys 0m0.349s 00:09:54.859 04:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.859 04:50:18 -- common/autotest_common.sh@10 -- # set +x 00:09:54.859 ************************************ 00:09:54.859 END TEST accel_crc32c_C2 00:09:54.859 ************************************ 00:09:54.859 04:50:18 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:54.859 04:50:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:54.859 04:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.859 04:50:18 -- common/autotest_common.sh@10 -- # set +x 00:09:55.118 ************************************ 00:09:55.118 START TEST accel_copy 00:09:55.118 ************************************ 00:09:55.118 04:50:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:09:55.118 04:50:18 -- accel/accel.sh@16 -- # local accel_opc 00:09:55.118 04:50:18 -- accel/accel.sh@17 -- # local accel_module 00:09:55.118 04:50:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:55.118 04:50:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:55.118 04:50:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.118 04:50:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.118 04:50:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.118 04:50:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.118 04:50:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.118 04:50:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.118 04:50:18 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.118 04:50:18 -- accel/accel.sh@42 -- # jq -r . 00:09:55.118 [2024-11-18 04:50:18.423354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:55.118 [2024-11-18 04:50:18.423723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63771 ] 00:09:55.118 [2024-11-18 04:50:18.596458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.377 [2024-11-18 04:50:18.784513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.907 04:50:20 -- accel/accel.sh@18 -- # out=' 00:09:57.907 SPDK Configuration: 00:09:57.907 Core mask: 0x1 00:09:57.907 00:09:57.907 Accel Perf Configuration: 00:09:57.907 Workload Type: copy 00:09:57.907 Transfer size: 4096 bytes 00:09:57.907 Vector count 1 00:09:57.907 Module: software 00:09:57.907 Queue depth: 32 00:09:57.907 Allocate depth: 32 00:09:57.907 # threads/core: 1 00:09:57.907 Run time: 1 seconds 00:09:57.907 Verify: Yes 00:09:57.907 00:09:57.907 Running for 1 seconds... 
00:09:57.907 00:09:57.907 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:57.907 ------------------------------------------------------------------------------------ 00:09:57.907 0,0 238240/s 930 MiB/s 0 0 00:09:57.907 ==================================================================================== 00:09:57.907 Total 238240/s 930 MiB/s 0 0' 00:09:57.907 04:50:20 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:20 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:57.907 04:50:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:57.907 04:50:20 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.907 04:50:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.907 04:50:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.907 04:50:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.907 04:50:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.907 04:50:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.907 04:50:20 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.907 04:50:20 -- accel/accel.sh@42 -- # jq -r . 00:09:57.907 [2024-11-18 04:50:20.871888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:57.907 [2024-11-18 04:50:20.872061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63803 ] 00:09:57.907 [2024-11-18 04:50:21.041530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.907 [2024-11-18 04:50:21.227457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val= 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val= 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=0x1 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val= 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val= 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=copy 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- 
accel/accel.sh@21 -- # val= 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=software 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@23 -- # accel_module=software 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=32 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=32 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=1 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val=Yes 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val= 00:09:57.907 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.907 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:57.907 04:50:21 -- accel/accel.sh@21 -- # val= 00:09:57.908 04:50:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.908 04:50:21 -- accel/accel.sh@20 -- # IFS=: 00:09:57.908 04:50:21 -- accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@21 -- # val= 00:09:59.838 04:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # IFS=: 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@21 -- # val= 00:09:59.838 04:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # IFS=: 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@21 -- # val= 00:09:59.838 04:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # IFS=: 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@21 -- # val= 00:09:59.838 04:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # IFS=: 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@21 -- # val= 00:09:59.838 04:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # IFS=: 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@21 -- # val= 00:09:59.838 04:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.838 04:50:23 -- accel/accel.sh@20 -- # IFS=: 00:09:59.838 04:50:23 -- 
accel/accel.sh@20 -- # read -r var val 00:09:59.838 04:50:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:59.838 04:50:23 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:59.838 04:50:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.838 00:09:59.838 real 0m4.846s 00:09:59.838 user 0m4.337s 00:09:59.838 sys 0m0.324s 00:09:59.838 04:50:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:59.838 04:50:23 -- common/autotest_common.sh@10 -- # set +x 00:09:59.838 ************************************ 00:09:59.838 END TEST accel_copy 00:09:59.838 ************************************ 00:09:59.838 04:50:23 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.838 04:50:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:59.838 04:50:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.838 04:50:23 -- common/autotest_common.sh@10 -- # set +x 00:09:59.838 ************************************ 00:09:59.838 START TEST accel_fill 00:09:59.838 ************************************ 00:09:59.838 04:50:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.838 04:50:23 -- accel/accel.sh@16 -- # local accel_opc 00:09:59.838 04:50:23 -- accel/accel.sh@17 -- # local accel_module 00:09:59.838 04:50:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.838 04:50:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.838 04:50:23 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.838 04:50:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.838 04:50:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.838 04:50:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.838 04:50:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.838 04:50:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.838 04:50:23 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.838 04:50:23 -- accel/accel.sh@42 -- # jq -r . 00:09:59.838 [2024-11-18 04:50:23.324764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:59.838 [2024-11-18 04:50:23.324902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63844 ] 00:10:00.097 [2024-11-18 04:50:23.480881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.356 [2024-11-18 04:50:23.660774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.258 04:50:25 -- accel/accel.sh@18 -- # out=' 00:10:02.258 SPDK Configuration: 00:10:02.258 Core mask: 0x1 00:10:02.258 00:10:02.258 Accel Perf Configuration: 00:10:02.258 Workload Type: fill 00:10:02.258 Fill pattern: 0x80 00:10:02.258 Transfer size: 4096 bytes 00:10:02.258 Vector count 1 00:10:02.258 Module: software 00:10:02.258 Queue depth: 64 00:10:02.258 Allocate depth: 64 00:10:02.258 # threads/core: 1 00:10:02.258 Run time: 1 seconds 00:10:02.258 Verify: Yes 00:10:02.258 00:10:02.258 Running for 1 seconds... 
00:10:02.258 00:10:02.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:02.258 ------------------------------------------------------------------------------------ 00:10:02.258 0,0 390336/s 1524 MiB/s 0 0 00:10:02.258 ==================================================================================== 00:10:02.258 Total 390336/s 1524 MiB/s 0 0' 00:10:02.258 04:50:25 -- accel/accel.sh@20 -- # IFS=: 00:10:02.258 04:50:25 -- accel/accel.sh@20 -- # read -r var val 00:10:02.258 04:50:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:02.258 04:50:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:02.258 04:50:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.258 04:50:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.258 04:50:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.258 04:50:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.258 04:50:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.258 04:50:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.258 04:50:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.258 04:50:25 -- accel/accel.sh@42 -- # jq -r . 00:10:02.258 [2024-11-18 04:50:25.685519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:02.258 [2024-11-18 04:50:25.685863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63876 ] 00:10:02.517 [2024-11-18 04:50:25.843424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.517 [2024-11-18 04:50:26.020720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val=0x1 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val=fill 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val=0x80 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 
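The Bandwidth column in the transfer tables above follows directly from the Transfers column: transfers per second multiplied by the 4096-byte transfer size, reported in MiB/s (1 MiB = 1048576 bytes). A quick shell check against the copy and fill rows, with the figures taken verbatim from the tables (the derivation itself is inferred from the numbers, not stated anywhere in the log):
  echo $(( 238240 * 4096 / 1024 / 1024 ))   # copy row:  930 MiB/s
  echo $(( 390336 * 4096 / 1024 / 1024 ))   # fill row: 1524 MiB/s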
00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val=software 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.775 04:50:26 -- accel/accel.sh@21 -- # val=64 00:10:02.775 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.775 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.776 04:50:26 -- accel/accel.sh@21 -- # val=64 00:10:02.776 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.776 04:50:26 -- accel/accel.sh@21 -- # val=1 00:10:02.776 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.776 04:50:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:02.776 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.776 04:50:26 -- accel/accel.sh@21 -- # val=Yes 00:10:02.776 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.776 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.776 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:02.776 04:50:26 -- accel/accel.sh@21 -- # val= 00:10:02.776 04:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # IFS=: 00:10:02.776 04:50:26 -- accel/accel.sh@20 -- # read -r var val 00:10:04.679 04:50:27 -- accel/accel.sh@21 -- # val= 00:10:04.680 04:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # IFS=: 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.680 04:50:27 -- accel/accel.sh@21 -- # val= 00:10:04.680 04:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # IFS=: 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.680 04:50:27 -- accel/accel.sh@21 -- # val= 00:10:04.680 04:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # IFS=: 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.680 04:50:27 -- accel/accel.sh@21 -- # val= 00:10:04.680 04:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # IFS=: 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.680 04:50:27 -- accel/accel.sh@21 -- # val= 00:10:04.680 04:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # IFS=: 
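The repeating IFS=: / read -r var val / case "$var" in records traced at accel.sh@20-22 are iterations of a loop that walks accel_perf's report line by line, splitting each "Key: value" pair on the colon and keeping the fields asserted on later. A minimal sketch consistent with those trace lines; the whitespace trim at @21, the case patterns, and the here-string plumbing are assumptions, while the @23/@24 assignments and the @28 checks appear verbatim in the trace:
  while IFS=: read -r var val; do                 # accel.sh@20
    val=${val# }                                  # accel.sh@21: traced as val=fill, val='4096 bytes', ...
    case "$var" in                                # accel.sh@22
      *Module*) accel_module=$val ;;              # accel.sh@23: accel_module=software
      *'Workload Type'*) accel_opc=$val ;;        # accel.sh@24: accel_opc=fill
    esac
  done <<< "$out"
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]   # accel.sh@28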
00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.680 04:50:27 -- accel/accel.sh@21 -- # val= 00:10:04.680 04:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # IFS=: 00:10:04.680 04:50:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.680 04:50:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:04.680 04:50:27 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:04.680 04:50:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.680 00:10:04.680 real 0m4.702s 00:10:04.680 user 0m4.242s 00:10:04.680 sys 0m0.275s 00:10:04.680 ************************************ 00:10:04.680 END TEST accel_fill 00:10:04.680 ************************************ 00:10:04.680 04:50:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.680 04:50:27 -- common/autotest_common.sh@10 -- # set +x 00:10:04.680 04:50:28 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:04.680 04:50:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:04.680 04:50:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.680 04:50:28 -- common/autotest_common.sh@10 -- # set +x 00:10:04.680 ************************************ 00:10:04.680 START TEST accel_copy_crc32c 00:10:04.680 ************************************ 00:10:04.680 04:50:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:10:04.680 04:50:28 -- accel/accel.sh@16 -- # local accel_opc 00:10:04.680 04:50:28 -- accel/accel.sh@17 -- # local accel_module 00:10:04.680 04:50:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:04.680 04:50:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:04.680 04:50:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.680 04:50:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.680 04:50:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.680 04:50:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.680 04:50:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.680 04:50:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.680 04:50:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.680 04:50:28 -- accel/accel.sh@42 -- # jq -r . 00:10:04.680 [2024-11-18 04:50:28.081814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.680 [2024-11-18 04:50:28.081961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63917 ] 00:10:04.939 [2024-11-18 04:50:28.250713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.939 [2024-11-18 04:50:28.430241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.483 04:50:30 -- accel/accel.sh@18 -- # out=' 00:10:07.483 SPDK Configuration: 00:10:07.483 Core mask: 0x1 00:10:07.483 00:10:07.483 Accel Perf Configuration: 00:10:07.483 Workload Type: copy_crc32c 00:10:07.483 CRC-32C seed: 0 00:10:07.483 Vector size: 4096 bytes 00:10:07.483 Transfer size: 4096 bytes 00:10:07.483 Vector count 1 00:10:07.483 Module: software 00:10:07.484 Queue depth: 32 00:10:07.484 Allocate depth: 32 00:10:07.484 # threads/core: 1 00:10:07.484 Run time: 1 seconds 00:10:07.484 Verify: Yes 00:10:07.484 00:10:07.484 Running for 1 seconds... 
00:10:07.484 00:10:07.484 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:07.484 ------------------------------------------------------------------------------------ 00:10:07.484 0,0 203456/s 794 MiB/s 0 0 00:10:07.484 ==================================================================================== 00:10:07.484 Total 203456/s 794 MiB/s 0 0' 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:07.484 04:50:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.484 04:50:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:07.484 04:50:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.484 04:50:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.484 04:50:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:07.484 04:50:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:07.484 04:50:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:07.484 04:50:30 -- accel/accel.sh@42 -- # jq -r . 00:10:07.484 [2024-11-18 04:50:30.478690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:07.484 [2024-11-18 04:50:30.478850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:10:07.484 [2024-11-18 04:50:30.649214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.484 [2024-11-18 04:50:30.814715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=0x1 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=0 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 
04:50:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=software 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@23 -- # accel_module=software 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=32 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=32 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=1 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val=Yes 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:07.484 04:50:30 -- accel/accel.sh@21 -- # val= 00:10:07.484 04:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # IFS=: 00:10:07.484 04:50:30 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@21 -- # val= 00:10:09.390 04:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@21 -- # val= 00:10:09.390 04:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@21 -- # val= 00:10:09.390 04:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@21 -- # val= 00:10:09.390 04:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # IFS=: 
00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@21 -- # val= 00:10:09.390 04:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@21 -- # val= 00:10:09.390 04:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.390 04:50:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.390 04:50:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:09.390 04:50:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:09.390 04:50:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:09.390 00:10:09.390 real 0m4.688s 00:10:09.390 user 0m4.199s 00:10:09.390 sys 0m0.304s 00:10:09.390 04:50:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:09.390 ************************************ 00:10:09.390 END TEST accel_copy_crc32c 00:10:09.390 ************************************ 00:10:09.390 04:50:32 -- common/autotest_common.sh@10 -- # set +x 00:10:09.390 04:50:32 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:09.390 04:50:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:09.390 04:50:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:09.390 04:50:32 -- common/autotest_common.sh@10 -- # set +x 00:10:09.390 ************************************ 00:10:09.390 START TEST accel_copy_crc32c_C2 00:10:09.390 ************************************ 00:10:09.390 04:50:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:09.390 04:50:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:09.390 04:50:32 -- accel/accel.sh@17 -- # local accel_module 00:10:09.390 04:50:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:09.390 04:50:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:09.390 04:50:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.390 04:50:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.390 04:50:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.390 04:50:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.390 04:50:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.390 04:50:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.390 04:50:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.390 04:50:32 -- accel/accel.sh@42 -- # jq -r . 00:10:09.390 [2024-11-18 04:50:32.825750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
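Every exec line traced at accel.sh@12 passes -c /dev/fd/62, so the JSON assembled by build_accel_config reaches accel_perf over a process-substitution descriptor rather than a file on disk. A rough sketch of that shape; the guard conditions, the JSON wrapping, and the <( ) plumbing are assumptions, and only the traced names (accel_json_cfg at @32, the guards at @33-37, local IFS=, at @41, jq -r . at @42) come from the log:
  build_accel_config() {
    accel_json_cfg=()                     # accel.sh@32
    # accel.sh@33-37: guards for optional accel modules; every one is
    # false in this run ([[ 0 -gt 0 ]], [[ -n '' ]]), so the array stays empty
    local IFS=,                           # accel.sh@41
    jq -r . <<< "[${accel_json_cfg[*]}]"  # accel.sh@42 (exact wrapping assumed)
  }
  "$SPDK_EXAMPLE_DIR/accel_perf" -c <(build_accel_config) -t 1 -w copy_crc32c -y -C 2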
00:10:09.390 [2024-11-18 04:50:32.825923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63995 ] 00:10:09.649 [2024-11-18 04:50:33.000123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.909 [2024-11-18 04:50:33.203972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.824 04:50:35 -- accel/accel.sh@18 -- # out=' 00:10:11.824 SPDK Configuration: 00:10:11.824 Core mask: 0x1 00:10:11.824 00:10:11.824 Accel Perf Configuration: 00:10:11.824 Workload Type: copy_crc32c 00:10:11.824 CRC-32C seed: 0 00:10:11.824 Vector size: 4096 bytes 00:10:11.824 Transfer size: 8192 bytes 00:10:11.824 Vector count 2 00:10:11.824 Module: software 00:10:11.824 Queue depth: 32 00:10:11.824 Allocate depth: 32 00:10:11.824 # threads/core: 1 00:10:11.824 Run time: 1 seconds 00:10:11.824 Verify: Yes 00:10:11.824 00:10:11.824 Running for 1 seconds... 00:10:11.824 00:10:11.824 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:11.824 ------------------------------------------------------------------------------------ 00:10:11.824 0,0 157504/s 1230 MiB/s 0 0 00:10:11.824 ==================================================================================== 00:10:11.824 Total 157504/s 615 MiB/s 0 0' 00:10:11.824 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:11.824 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:11.824 04:50:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:11.824 04:50:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:11.824 04:50:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.824 04:50:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.824 04:50:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.824 04:50:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.824 04:50:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.824 04:50:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.824 04:50:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.824 04:50:35 -- accel/accel.sh@42 -- # jq -r . 00:10:11.824 [2024-11-18 04:50:35.179595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
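One inconsistency worth noting in the copy_crc32c -C 2 table above: the transfer size is 8192 bytes (two 4096-byte vectors), and the per-core row matches transfers times transfer size, but the Total row prints 615 MiB/s, which is the same transfer count multiplied by a single 4096-byte vector. The two bandwidth figures therefore disagree by exactly a factor of two, while the plain copy_crc32c table further up is internally consistent. Checking all three with shell arithmetic:
  echo $(( 157504 * 8192 / 1024 / 1024 ))   # 1230 -> the per-core row
  echo $(( 157504 * 4096 / 1024 / 1024 ))   # 615  -> the Total row
  echo $(( 203456 * 4096 / 1024 / 1024 ))   # 794  -> the single-vector copy_crc32c table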
00:10:11.824 [2024-11-18 04:50:35.179762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64021 ] 00:10:12.099 [2024-11-18 04:50:35.350597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.099 [2024-11-18 04:50:35.516426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val=0x1 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val=0 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.358 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.358 04:50:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.358 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val=software 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@23 -- # accel_module=software 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val=32 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val=32 
00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val=1 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val=Yes 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.359 04:50:35 -- accel/accel.sh@21 -- # val= 00:10:12.359 04:50:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.359 04:50:35 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@21 -- # val= 00:10:14.265 04:50:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@21 -- # val= 00:10:14.265 04:50:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@21 -- # val= 00:10:14.265 04:50:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@21 -- # val= 00:10:14.265 04:50:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@21 -- # val= 00:10:14.265 04:50:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@21 -- # val= 00:10:14.265 04:50:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.265 04:50:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.265 04:50:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:14.265 04:50:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:14.265 04:50:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.265 00:10:14.265 real 0m4.658s 00:10:14.265 user 0m2.097s 00:10:14.265 sys 0m0.174s 00:10:14.265 04:50:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.265 04:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:14.265 ************************************ 00:10:14.265 END TEST accel_copy_crc32c_C2 00:10:14.265 ************************************ 00:10:14.265 04:50:37 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:14.265 04:50:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
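The START/END banners and the real/user/sys triplets around every TEST block come from the run_test wrapper traced at common/autotest_common.sh@1087-1115, whose argument-count check ('[' 7 -le 1 ']') has just been traced for accel_dualcast. A minimal sketch consistent with those lines; the guard body and banner width are assumptions, while the banner text, the time output, and the xtrace_disable calls (whose set +x bodies trace at @10) are all visible above:
  run_test() {                     # common/autotest_common.sh
    if [ $# -le 1 ]; then          # @1087: traced as '[' 7 -le 1 ']', '[' 13 -le 1 ']', ...
      return 1                     # guard body assumed
    fi
    local test_name=$1; shift
    xtrace_disable                 # @1093 (its set +x traces at @10)
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                      # emits the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    xtrace_disable                 # @1115
  }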
00:10:14.265 04:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.265 04:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:14.265 ************************************ 00:10:14.265 START TEST accel_dualcast 00:10:14.265 ************************************ 00:10:14.265 04:50:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:10:14.265 04:50:37 -- accel/accel.sh@16 -- # local accel_opc 00:10:14.265 04:50:37 -- accel/accel.sh@17 -- # local accel_module 00:10:14.265 04:50:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:14.265 04:50:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:14.265 04:50:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.265 04:50:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.265 04:50:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.265 04:50:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.265 04:50:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.265 04:50:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.265 04:50:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.265 04:50:37 -- accel/accel.sh@42 -- # jq -r . 00:10:14.265 [2024-11-18 04:50:37.536388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:14.265 [2024-11-18 04:50:37.536538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64062 ] 00:10:14.265 [2024-11-18 04:50:37.703939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.524 [2024-11-18 04:50:37.867532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.427 04:50:39 -- accel/accel.sh@18 -- # out=' 00:10:16.427 SPDK Configuration: 00:10:16.427 Core mask: 0x1 00:10:16.427 00:10:16.427 Accel Perf Configuration: 00:10:16.427 Workload Type: dualcast 00:10:16.427 Transfer size: 4096 bytes 00:10:16.427 Vector count 1 00:10:16.427 Module: software 00:10:16.427 Queue depth: 32 00:10:16.427 Allocate depth: 32 00:10:16.427 # threads/core: 1 00:10:16.427 Run time: 1 seconds 00:10:16.427 Verify: Yes 00:10:16.427 00:10:16.427 Running for 1 seconds... 00:10:16.427 00:10:16.427 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:16.427 ------------------------------------------------------------------------------------ 00:10:16.427 0,0 308832/s 1206 MiB/s 0 0 00:10:16.427 ==================================================================================== 00:10:16.427 Total 308832/s 1206 MiB/s 0 0' 00:10:16.427 04:50:39 -- accel/accel.sh@20 -- # IFS=: 00:10:16.427 04:50:39 -- accel/accel.sh@20 -- # read -r var val 00:10:16.427 04:50:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:16.427 04:50:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:16.427 04:50:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.427 04:50:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.427 04:50:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.427 04:50:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.427 04:50:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.427 04:50:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.427 04:50:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.427 04:50:39 -- accel/accel.sh@42 -- # jq -r . 
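The single quote dangling after "Total 308832/s 1206 MiB/s 0 0" above is not corruption: the entire report is captured into a variable, traced at accel.sh@18 as out='...', before being handed to the parsing loop sketched earlier. The dualcast row itself (one 4096-byte source written to two destinations, going by the operation's name) again matches transfers times transfer size. The capture, assuming a plain command substitution, plus the check:
  out=$(accel_perf "$@")                    # accel.sh@18: xtrace renders this as out='...'
  echo $(( 308832 * 4096 / 1024 / 1024 ))   # 1206 -> the dualcast bandwidth column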
00:10:16.427 [2024-11-18 04:50:39.847106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:16.427 [2024-11-18 04:50:39.847479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64088 ] 00:10:16.686 [2024-11-18 04:50:40.014722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.686 [2024-11-18 04:50:40.175831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=0x1 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=dualcast 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=software 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@23 -- # accel_module=software 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=32 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=32 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=1 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 
04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val=Yes 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:16.945 04:50:40 -- accel/accel.sh@21 -- # val= 00:10:16.945 04:50:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # IFS=: 00:10:16.945 04:50:40 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@21 -- # val= 00:10:18.850 04:50:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # IFS=: 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@21 -- # val= 00:10:18.850 04:50:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # IFS=: 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@21 -- # val= 00:10:18.850 04:50:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # IFS=: 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@21 -- # val= 00:10:18.850 04:50:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # IFS=: 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@21 -- # val= 00:10:18.850 04:50:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # IFS=: 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@21 -- # val= 00:10:18.850 04:50:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # IFS=: 00:10:18.850 04:50:42 -- accel/accel.sh@20 -- # read -r var val 00:10:18.850 04:50:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:18.850 04:50:42 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:18.850 04:50:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:18.850 00:10:18.850 real 0m4.605s 00:10:18.850 user 0m4.129s 00:10:18.850 sys 0m0.288s 00:10:18.850 04:50:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.850 04:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:18.850 ************************************ 00:10:18.850 END TEST accel_dualcast 00:10:18.850 ************************************ 00:10:18.850 04:50:42 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:18.850 04:50:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:18.850 04:50:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.850 04:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:18.850 ************************************ 00:10:18.850 START TEST accel_compare 00:10:18.850 ************************************ 00:10:18.850 04:50:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:18.850 
04:50:42 -- accel/accel.sh@16 -- # local accel_opc 00:10:18.850 04:50:42 -- accel/accel.sh@17 -- # local accel_module 00:10:18.850 04:50:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:18.850 04:50:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:18.850 04:50:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.850 04:50:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.850 04:50:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.850 04:50:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.850 04:50:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.850 04:50:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.850 04:50:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.850 04:50:42 -- accel/accel.sh@42 -- # jq -r . 00:10:18.850 [2024-11-18 04:50:42.195208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.850 [2024-11-18 04:50:42.195353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64135 ] 00:10:18.850 [2024-11-18 04:50:42.363804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.109 [2024-11-18 04:50:42.524079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.012 04:50:44 -- accel/accel.sh@18 -- # out=' 00:10:21.012 SPDK Configuration: 00:10:21.012 Core mask: 0x1 00:10:21.012 00:10:21.012 Accel Perf Configuration: 00:10:21.012 Workload Type: compare 00:10:21.012 Transfer size: 4096 bytes 00:10:21.012 Vector count 1 00:10:21.012 Module: software 00:10:21.012 Queue depth: 32 00:10:21.012 Allocate depth: 32 00:10:21.012 # threads/core: 1 00:10:21.012 Run time: 1 seconds 00:10:21.012 Verify: Yes 00:10:21.012 00:10:21.012 Running for 1 seconds... 00:10:21.012 00:10:21.012 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.012 ------------------------------------------------------------------------------------ 00:10:21.012 0,0 420192/s 1641 MiB/s 0 0 00:10:21.012 ==================================================================================== 00:10:21.012 Total 420192/s 1641 MiB/s 0 0' 00:10:21.012 04:50:44 -- accel/accel.sh@20 -- # IFS=: 00:10:21.012 04:50:44 -- accel/accel.sh@20 -- # read -r var val 00:10:21.012 04:50:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:21.012 04:50:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:21.012 04:50:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.012 04:50:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.012 04:50:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.012 04:50:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.012 04:50:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.012 04:50:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.012 04:50:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.012 04:50:44 -- accel/accel.sh@42 -- # jq -r . 00:10:21.012 [2024-11-18 04:50:44.492777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
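In every result table the leading "0,0" is a Core,Thread pair. The EAL parameters above pin each run to core mask 0x1, a single reactor comes up on core 0, and the configuration block reports one thread per core, so each run produces exactly one row and the Total line simply repeats it. Decoding the mask in shell, plus the same bandwidth check for the compare row:
  for i in {0..3}; do (( 0x1 >> i & 1 )) && echo "core $i is in mask 0x1"; done
  echo $(( 420192 * 4096 / 1024 / 1024 ))   # 1641 -> the compare bandwidth column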
00:10:21.012 [2024-11-18 04:50:44.493251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64161 ] 00:10:21.271 [2024-11-18 04:50:44.662314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.531 [2024-11-18 04:50:44.838755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.531 04:50:44 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:44 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:44 -- accel/accel.sh@21 -- # val=0x1 00:10:21.531 04:50:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:44 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:44 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:44 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val=compare 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val=software 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@23 -- # accel_module=software 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val=32 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val=32 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val=1 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val=Yes 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:21.531 04:50:45 -- accel/accel.sh@21 -- # val= 00:10:21.531 04:50:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # IFS=: 00:10:21.531 04:50:45 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 04:50:46 -- accel/accel.sh@21 -- # val= 00:10:23.437 04:50:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # IFS=: 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 04:50:46 -- accel/accel.sh@21 -- # val= 00:10:23.437 04:50:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # IFS=: 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 04:50:46 -- accel/accel.sh@21 -- # val= 00:10:23.437 04:50:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # IFS=: 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 04:50:46 -- accel/accel.sh@21 -- # val= 00:10:23.437 04:50:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # IFS=: 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 04:50:46 -- accel/accel.sh@21 -- # val= 00:10:23.437 04:50:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # IFS=: 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 04:50:46 -- accel/accel.sh@21 -- # val= 00:10:23.437 04:50:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # IFS=: 00:10:23.437 04:50:46 -- accel/accel.sh@20 -- # read -r var val 00:10:23.437 ************************************ 00:10:23.437 END TEST accel_compare 00:10:23.437 ************************************ 00:10:23.437 04:50:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:23.437 04:50:46 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:23.437 04:50:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.437 00:10:23.437 real 0m4.592s 00:10:23.437 user 0m4.098s 00:10:23.437 sys 0m0.310s 00:10:23.437 04:50:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.437 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:23.437 04:50:46 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:23.437 04:50:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:23.437 04:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.437 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:23.437 ************************************ 00:10:23.437 START TEST accel_xor 00:10:23.437 ************************************ 00:10:23.437 04:50:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:23.437 04:50:46 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.437 04:50:46 -- accel/accel.sh@17 -- # local accel_module 00:10:23.437 
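The \s\o\f\t\w\a\r\e spelling in the accel.sh@28 checks above is an xtrace artifact, not garbling: inside [[ ]], the right-hand side of == is a glob pattern, so bash prints a quoted operand with every character backslash-escaped to show it is being matched literally. The same test is reproducible in any bash shell:
  module=software
  [[ $module == \s\o\f\t\w\a\r\e ]] && echo "literal match"   # xtrace prints it exactly as in the log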
04:50:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:23.437 04:50:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:23.437 04:50:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.437 04:50:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.437 04:50:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.437 04:50:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.437 04:50:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.437 04:50:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.437 04:50:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.437 04:50:46 -- accel/accel.sh@42 -- # jq -r . 00:10:23.437 [2024-11-18 04:50:46.845123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:23.437 [2024-11-18 04:50:46.845314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64202 ] 00:10:23.697 [2024-11-18 04:50:47.012678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.697 [2024-11-18 04:50:47.172446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.602 04:50:49 -- accel/accel.sh@18 -- # out=' 00:10:25.602 SPDK Configuration: 00:10:25.602 Core mask: 0x1 00:10:25.602 00:10:25.602 Accel Perf Configuration: 00:10:25.602 Workload Type: xor 00:10:25.602 Source buffers: 2 00:10:25.602 Transfer size: 4096 bytes 00:10:25.602 Vector count 1 00:10:25.602 Module: software 00:10:25.602 Queue depth: 32 00:10:25.602 Allocate depth: 32 00:10:25.602 # threads/core: 1 00:10:25.602 Run time: 1 seconds 00:10:25.602 Verify: Yes 00:10:25.602 00:10:25.602 Running for 1 seconds... 00:10:25.602 00:10:25.602 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:25.602 ------------------------------------------------------------------------------------ 00:10:25.602 0,0 223488/s 873 MiB/s 0 0 00:10:25.602 ==================================================================================== 00:10:25.602 Total 223488/s 873 MiB/s 0 0' 00:10:25.602 04:50:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:25.602 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:25.602 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:25.602 04:50:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:25.602 04:50:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.602 04:50:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.602 04:50:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.602 04:50:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.602 04:50:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.602 04:50:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.602 04:50:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.602 04:50:49 -- accel/accel.sh@42 -- # jq -r . 00:10:25.602 [2024-11-18 04:50:49.121243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
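The bandwidth column in the table above follows directly from the transfer rate and the 4096-byte transfer size: 223488 transfers/s at 4096 B each is 915,406,848 B/s, about 873 MiB/s, matching both table rows. A one-line check:

    # 223488 transfers/s at 4096 B each, in MiB/s (1 MiB = 1048576 B); prints 873
    echo $((223488 * 4096 / 1048576))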
00:10:25.602 [2024-11-18 04:50:49.121436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64234 ] 00:10:25.860 [2024-11-18 04:50:49.304632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.118 [2024-11-18 04:50:49.461807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.118 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.118 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.118 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.118 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.118 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.118 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.118 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.118 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.118 04:50:49 -- accel/accel.sh@21 -- # val=0x1 00:10:26.118 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=xor 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=2 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=software 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@23 -- # accel_module=software 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=32 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=32 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=1 00:10:26.119 04:50:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val=Yes 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:26.119 04:50:49 -- accel/accel.sh@21 -- # val= 00:10:26.119 04:50:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # IFS=: 00:10:26.119 04:50:49 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@21 -- # val= 00:10:28.023 04:50:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@21 -- # val= 00:10:28.023 04:50:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@21 -- # val= 00:10:28.023 04:50:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@21 -- # val= 00:10:28.023 04:50:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@21 -- # val= 00:10:28.023 04:50:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@21 -- # val= 00:10:28.023 04:50:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.023 04:50:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.023 04:50:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:28.023 04:50:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:28.023 04:50:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.023 00:10:28.023 real 0m4.566s 00:10:28.023 user 0m4.083s 00:10:28.023 sys 0m0.299s 00:10:28.023 04:50:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:28.023 ************************************ 00:10:28.023 END TEST accel_xor 00:10:28.023 ************************************ 00:10:28.023 04:50:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.023 04:50:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:28.023 04:50:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:28.023 04:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.023 04:50:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.023 ************************************ 00:10:28.023 START TEST accel_xor 00:10:28.023 ************************************ 00:10:28.023 
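This second xor test adds -x 3, which raises the source-buffer count from the default two to three (the configuration block below duly reports "Source buffers: 3"). Throughput dips slightly against the two-buffer run, 215520/s versus 223488/s, plausibly because each destination block now xors one more input. The equivalent hand invocation, using only flags present in the log:

    # xor with three source buffers per destination block instead of two.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3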
04:50:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:10:28.023 04:50:51 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.023 04:50:51 -- accel/accel.sh@17 -- # local accel_module 00:10:28.023 04:50:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:28.023 04:50:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:28.023 04:50:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.023 04:50:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.024 04:50:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.024 04:50:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.024 04:50:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.024 04:50:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.024 04:50:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.024 04:50:51 -- accel/accel.sh@42 -- # jq -r . 00:10:28.024 [2024-11-18 04:50:51.455350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:28.024 [2024-11-18 04:50:51.455501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64280 ] 00:10:28.283 [2024-11-18 04:50:51.624870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.283 [2024-11-18 04:50:51.782434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.188 04:50:53 -- accel/accel.sh@18 -- # out=' 00:10:30.188 SPDK Configuration: 00:10:30.188 Core mask: 0x1 00:10:30.188 00:10:30.188 Accel Perf Configuration: 00:10:30.188 Workload Type: xor 00:10:30.188 Source buffers: 3 00:10:30.188 Transfer size: 4096 bytes 00:10:30.188 Vector count 1 00:10:30.188 Module: software 00:10:30.188 Queue depth: 32 00:10:30.188 Allocate depth: 32 00:10:30.188 # threads/core: 1 00:10:30.188 Run time: 1 seconds 00:10:30.188 Verify: Yes 00:10:30.188 00:10:30.188 Running for 1 seconds... 00:10:30.188 00:10:30.188 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:30.188 ------------------------------------------------------------------------------------ 00:10:30.188 0,0 215520/s 841 MiB/s 0 0 00:10:30.188 ==================================================================================== 00:10:30.188 Total 215520/s 841 MiB/s 0 0' 00:10:30.188 04:50:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.188 04:50:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:30.188 04:50:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.188 04:50:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.188 04:50:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:30.188 04:50:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.188 04:50:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.188 04:50:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.188 04:50:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.188 04:50:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.188 04:50:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.188 04:50:53 -- accel/accel.sh@42 -- # jq -r . 00:10:30.448 [2024-11-18 04:50:53.739520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:30.448 [2024-11-18 04:50:53.739672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64306 ] 00:10:30.448 [2024-11-18 04:50:53.907179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.707 [2024-11-18 04:50:54.066859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val=0x1 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val=xor 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.707 04:50:54 -- accel/accel.sh@21 -- # val=3 00:10:30.707 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.707 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.966 04:50:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:30.966 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.966 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.966 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.966 04:50:54 -- accel/accel.sh@21 -- # val=software 00:10:30.966 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.966 04:50:54 -- accel/accel.sh@23 -- # accel_module=software 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.966 04:50:54 -- accel/accel.sh@21 -- # val=32 00:10:30.966 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.966 04:50:54 -- accel/accel.sh@21 -- # val=32 00:10:30.966 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.966 04:50:54 -- accel/accel.sh@21 -- # val=1 00:10:30.966 04:50:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.966 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.967 04:50:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:30.967 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.967 04:50:54 -- accel/accel.sh@21 -- # val=Yes 00:10:30.967 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.967 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.967 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:30.967 04:50:54 -- accel/accel.sh@21 -- # val= 00:10:30.967 04:50:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # IFS=: 00:10:30.967 04:50:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@21 -- # val= 00:10:32.895 04:50:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # IFS=: 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@21 -- # val= 00:10:32.895 04:50:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # IFS=: 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@21 -- # val= 00:10:32.895 04:50:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # IFS=: 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@21 -- # val= 00:10:32.895 04:50:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # IFS=: 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@21 -- # val= 00:10:32.895 04:50:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # IFS=: 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@21 -- # val= 00:10:32.895 04:50:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # IFS=: 00:10:32.895 04:50:55 -- accel/accel.sh@20 -- # read -r var val 00:10:32.895 04:50:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.895 04:50:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:32.895 04:50:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.895 00:10:32.895 real 0m4.577s 00:10:32.895 user 0m4.064s 00:10:32.895 sys 0m0.328s 00:10:32.895 04:50:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.895 04:50:55 -- common/autotest_common.sh@10 -- # set +x 00:10:32.895 ************************************ 00:10:32.895 END TEST accel_xor 00:10:32.895 ************************************ 00:10:32.895 04:50:56 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:32.895 04:50:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:32.895 04:50:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.895 04:50:56 -- common/autotest_common.sh@10 -- # set +x 00:10:32.895 ************************************ 00:10:32.895 START TEST accel_dif_verify 00:10:32.895 ************************************ 
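accel_dif_verify moves on to T10 DIF handling: per the configuration block below, each 4096-byte vector is treated as 512-byte blocks, each carrying 8 bytes of DIF metadata, and the workload checks that protection information rather than producing output. The run reports "Verify: No" because no -y flag is passed; the DIF check itself is the verification. Hand invocation from the logged flags:

    # Verify T10 DIF protection information; no -y, the DIF check is the test.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify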
00:10:32.895 04:50:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:10:32.895 04:50:56 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.895 04:50:56 -- accel/accel.sh@17 -- # local accel_module 00:10:32.895 04:50:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:32.895 04:50:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:32.895 04:50:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.895 04:50:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.895 04:50:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.895 04:50:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.895 04:50:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.895 04:50:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.895 04:50:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.895 04:50:56 -- accel/accel.sh@42 -- # jq -r . 00:10:32.895 [2024-11-18 04:50:56.083006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.895 [2024-11-18 04:50:56.083163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64347 ] 00:10:32.895 [2024-11-18 04:50:56.248936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.895 [2024-11-18 04:50:56.407677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.431 04:50:58 -- accel/accel.sh@18 -- # out=' 00:10:35.432 SPDK Configuration: 00:10:35.432 Core mask: 0x1 00:10:35.432 00:10:35.432 Accel Perf Configuration: 00:10:35.432 Workload Type: dif_verify 00:10:35.432 Vector size: 4096 bytes 00:10:35.432 Transfer size: 4096 bytes 00:10:35.432 Block size: 512 bytes 00:10:35.432 Metadata size: 8 bytes 00:10:35.432 Vector count 1 00:10:35.432 Module: software 00:10:35.432 Queue depth: 32 00:10:35.432 Allocate depth: 32 00:10:35.432 # threads/core: 1 00:10:35.432 Run time: 1 seconds 00:10:35.432 Verify: No 00:10:35.432 00:10:35.432 Running for 1 seconds... 00:10:35.432 00:10:35.432 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:35.432 ------------------------------------------------------------------------------------ 00:10:35.432 0,0 103456/s 410 MiB/s 0 0 00:10:35.432 ==================================================================================== 00:10:35.432 Total 103456/s 404 MiB/s 0 0' 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:35.432 04:50:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:35.432 04:50:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.432 04:50:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.432 04:50:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.432 04:50:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.432 04:50:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.432 04:50:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.432 04:50:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.432 04:50:58 -- accel/accel.sh@42 -- # jq -r . 00:10:35.432 [2024-11-18 04:50:58.374757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:35.432 [2024-11-18 04:50:58.374913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64379 ] 00:10:35.432 [2024-11-18 04:50:58.542807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.432 [2024-11-18 04:50:58.699595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val=0x1 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val=dif_verify 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val=software 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@23 -- # accel_module=software 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 
-- # val=32 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val=32 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val=1 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val=No 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:35.432 04:50:58 -- accel/accel.sh@21 -- # val= 00:10:35.432 04:50:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # IFS=: 00:10:35.432 04:50:58 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 04:51:00 -- accel/accel.sh@21 -- # val= 00:10:37.335 04:51:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # IFS=: 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 04:51:00 -- accel/accel.sh@21 -- # val= 00:10:37.335 04:51:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # IFS=: 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 04:51:00 -- accel/accel.sh@21 -- # val= 00:10:37.335 04:51:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # IFS=: 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 04:51:00 -- accel/accel.sh@21 -- # val= 00:10:37.335 04:51:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # IFS=: 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 04:51:00 -- accel/accel.sh@21 -- # val= 00:10:37.335 04:51:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # IFS=: 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 04:51:00 -- accel/accel.sh@21 -- # val= 00:10:37.335 04:51:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # IFS=: 00:10:37.335 04:51:00 -- accel/accel.sh@20 -- # read -r var val 00:10:37.335 ************************************ 00:10:37.335 END TEST accel_dif_verify 00:10:37.335 ************************************ 00:10:37.335 04:51:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:37.335 04:51:00 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:37.335 04:51:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.335 00:10:37.335 real 0m4.614s 00:10:37.335 user 0m4.132s 00:10:37.335 sys 0m0.298s 00:10:37.335 04:51:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:37.335 
04:51:00 -- common/autotest_common.sh@10 -- # set +x 00:10:37.335 04:51:00 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:37.335 04:51:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:37.335 04:51:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.335 04:51:00 -- common/autotest_common.sh@10 -- # set +x 00:10:37.335 ************************************ 00:10:37.335 START TEST accel_dif_generate 00:10:37.335 ************************************ 00:10:37.335 04:51:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:10:37.335 04:51:00 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.335 04:51:00 -- accel/accel.sh@17 -- # local accel_module 00:10:37.335 04:51:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:37.335 04:51:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:37.335 04:51:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.335 04:51:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.335 04:51:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.335 04:51:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.335 04:51:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.335 04:51:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.335 04:51:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.335 04:51:00 -- accel/accel.sh@42 -- # jq -r . 00:10:37.335 [2024-11-18 04:51:00.750772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:37.335 [2024-11-18 04:51:00.750927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64421 ] 00:10:37.595 [2024-11-18 04:51:00.921280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.595 [2024-11-18 04:51:01.091477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.131 04:51:03 -- accel/accel.sh@18 -- # out=' 00:10:40.131 SPDK Configuration: 00:10:40.131 Core mask: 0x1 00:10:40.131 00:10:40.131 Accel Perf Configuration: 00:10:40.131 Workload Type: dif_generate 00:10:40.131 Vector size: 4096 bytes 00:10:40.131 Transfer size: 4096 bytes 00:10:40.131 Block size: 512 bytes 00:10:40.131 Metadata size: 8 bytes 00:10:40.131 Vector count 1 00:10:40.131 Module: software 00:10:40.131 Queue depth: 32 00:10:40.131 Allocate depth: 32 00:10:40.131 # threads/core: 1 00:10:40.131 Run time: 1 seconds 00:10:40.131 Verify: No 00:10:40.131 00:10:40.131 Running for 1 seconds... 
00:10:40.131 00:10:40.131 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.131 ------------------------------------------------------------------------------------ 00:10:40.131 0,0 118144/s 468 MiB/s 0 0 00:10:40.131 ==================================================================================== 00:10:40.131 Total 118144/s 461 MiB/s 0 0' 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:40.131 04:51:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.131 04:51:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.131 04:51:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.131 04:51:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.131 04:51:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.131 04:51:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.131 04:51:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.131 04:51:03 -- accel/accel.sh@42 -- # jq -r . 00:10:40.131 [2024-11-18 04:51:03.093748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:40.131 [2024-11-18 04:51:03.093877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64447 ] 00:10:40.131 [2024-11-18 04:51:03.248351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.131 [2024-11-18 04:51:03.429445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=0x1 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=dif_generate 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 
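Where dif_verify checks existing protection information, dif_generate computes and inserts it over the same 512-byte-block, 8-byte-metadata geometry; it runs somewhat faster here (118144/s against 103456/s for verify). The Total row's 461 MiB/s agrees with the raw math, while the per-core 468 MiB/s is presumably derived from the core's measured runtime rather than the nominal one second:

    # Generate T10 DIF protection information, flags as logged; the echo
    # cross-checks the Total row: 118144 * 4096 / 1048576 = 461 MiB/s.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
    echo $((118144 * 4096 / 1048576))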
00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=software 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=32 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=32 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=1 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.131 04:51:03 -- accel/accel.sh@21 -- # val=No 00:10:40.131 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.131 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.132 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.132 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.132 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.132 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.132 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.132 04:51:03 -- accel/accel.sh@21 -- # val= 00:10:40.132 04:51:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.132 04:51:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.132 04:51:03 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 04:51:05 -- accel/accel.sh@21 -- # val= 00:10:42.035 04:51:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # IFS=: 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 04:51:05 -- accel/accel.sh@21 -- # val= 00:10:42.035 04:51:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # IFS=: 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 04:51:05 -- accel/accel.sh@21 -- # val= 00:10:42.035 04:51:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.035 04:51:05 -- 
accel/accel.sh@20 -- # IFS=: 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 04:51:05 -- accel/accel.sh@21 -- # val= 00:10:42.035 04:51:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # IFS=: 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 04:51:05 -- accel/accel.sh@21 -- # val= 00:10:42.035 04:51:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # IFS=: 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 04:51:05 -- accel/accel.sh@21 -- # val= 00:10:42.035 04:51:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # IFS=: 00:10:42.035 04:51:05 -- accel/accel.sh@20 -- # read -r var val 00:10:42.035 ************************************ 00:10:42.035 END TEST accel_dif_generate 00:10:42.035 ************************************ 00:10:42.035 04:51:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.035 04:51:05 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:42.035 04:51:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.035 00:10:42.035 real 0m4.650s 00:10:42.035 user 0m4.145s 00:10:42.035 sys 0m0.319s 00:10:42.035 04:51:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.035 04:51:05 -- common/autotest_common.sh@10 -- # set +x 00:10:42.035 04:51:05 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:42.035 04:51:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:42.035 04:51:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.035 04:51:05 -- common/autotest_common.sh@10 -- # set +x 00:10:42.035 ************************************ 00:10:42.035 START TEST accel_dif_generate_copy 00:10:42.035 ************************************ 00:10:42.035 04:51:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:10:42.035 04:51:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.035 04:51:05 -- accel/accel.sh@17 -- # local accel_module 00:10:42.035 04:51:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:42.035 04:51:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:42.035 04:51:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.035 04:51:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.036 04:51:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.036 04:51:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.036 04:51:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.036 04:51:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.036 04:51:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.036 04:51:05 -- accel/accel.sh@42 -- # jq -r . 00:10:42.036 [2024-11-18 04:51:05.442246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:42.036 [2024-11-18 04:51:05.442576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64499 ] 00:10:42.294 [2024-11-18 04:51:05.600126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.294 [2024-11-18 04:51:05.774384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.198 04:51:07 -- accel/accel.sh@18 -- # out=' 00:10:44.198 SPDK Configuration: 00:10:44.198 Core mask: 0x1 00:10:44.198 00:10:44.198 Accel Perf Configuration: 00:10:44.198 Workload Type: dif_generate_copy 00:10:44.198 Vector size: 4096 bytes 00:10:44.198 Transfer size: 4096 bytes 00:10:44.198 Vector count 1 00:10:44.198 Module: software 00:10:44.198 Queue depth: 32 00:10:44.198 Allocate depth: 32 00:10:44.198 # threads/core: 1 00:10:44.198 Run time: 1 seconds 00:10:44.198 Verify: No 00:10:44.198 00:10:44.198 Running for 1 seconds... 00:10:44.198 00:10:44.198 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:44.198 ------------------------------------------------------------------------------------ 00:10:44.198 0,0 90208/s 357 MiB/s 0 0 00:10:44.198 ==================================================================================== 00:10:44.198 Total 90208/s 352 MiB/s 0 0' 00:10:44.198 04:51:07 -- accel/accel.sh@20 -- # IFS=: 00:10:44.198 04:51:07 -- accel/accel.sh@20 -- # read -r var val 00:10:44.198 04:51:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:44.198 04:51:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:44.198 04:51:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.198 04:51:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.198 04:51:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.198 04:51:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.198 04:51:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.198 04:51:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.198 04:51:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.198 04:51:07 -- accel/accel.sh@42 -- # jq -r . 00:10:44.457 [2024-11-18 04:51:07.740353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
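dif_generate_copy fuses DIF generation with a buffer copy, and the extra data movement shows in the rate: 90208/s against 118144/s for plain dif_generate. The Total row again matches the arithmetic:

    # 90208 transfers/s at 4096 B each; prints 352, as in the Total row above.
    echo $((90208 * 4096 / 1048576))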
00:10:44.457 [2024-11-18 04:51:07.740509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64525 ] 00:10:44.457 [2024-11-18 04:51:07.911105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.716 [2024-11-18 04:51:08.077042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.975 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.975 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.975 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val=0x1 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val=software 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@23 -- # accel_module=software 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val=32 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val=32 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 
-- # val=1 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val=No 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:44.976 04:51:08 -- accel/accel.sh@21 -- # val= 00:10:44.976 04:51:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # IFS=: 00:10:44.976 04:51:08 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@21 -- # val= 00:10:46.881 04:51:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@21 -- # val= 00:10:46.881 04:51:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@21 -- # val= 00:10:46.881 04:51:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@21 -- # val= 00:10:46.881 04:51:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@21 -- # val= 00:10:46.881 04:51:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@21 -- # val= 00:10:46.881 04:51:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.881 04:51:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.881 04:51:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:46.881 04:51:09 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:46.881 04:51:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.881 00:10:46.881 real 0m4.576s 00:10:46.881 user 0m4.082s 00:10:46.881 sys 0m0.311s 00:10:46.881 04:51:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.881 ************************************ 00:10:46.881 END TEST accel_dif_generate_copy 00:10:46.881 ************************************ 00:10:46.881 04:51:09 -- common/autotest_common.sh@10 -- # set +x 00:10:46.881 04:51:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:46.881 04:51:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.881 04:51:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:46.881 04:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.881 04:51:10 -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.881 ************************************ 00:10:46.881 START TEST accel_comp 00:10:46.881 ************************************ 00:10:46.881 04:51:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.881 04:51:10 -- accel/accel.sh@16 -- # local accel_opc 00:10:46.881 04:51:10 -- accel/accel.sh@17 -- # local accel_module 00:10:46.881 04:51:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.881 04:51:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.881 04:51:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.881 04:51:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.881 04:51:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.881 04:51:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.881 04:51:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.881 04:51:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.881 04:51:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.881 04:51:10 -- accel/accel.sh@42 -- # jq -r . 00:10:46.881 [2024-11-18 04:51:10.091132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.881 [2024-11-18 04:51:10.091383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64566 ] 00:10:46.881 [2024-11-18 04:51:10.288523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.140 [2024-11-18 04:51:10.452758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.078 04:51:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:49.078 00:10:49.078 SPDK Configuration: 00:10:49.078 Core mask: 0x1 00:10:49.078 00:10:49.078 Accel Perf Configuration: 00:10:49.078 Workload Type: compress 00:10:49.078 Transfer size: 4096 bytes 00:10:49.078 Vector count 1 00:10:49.078 Module: software 00:10:49.078 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.078 Queue depth: 32 00:10:49.078 Allocate depth: 32 00:10:49.078 # threads/core: 1 00:10:49.078 Run time: 1 seconds 00:10:49.078 Verify: No 00:10:49.078 00:10:49.078 Running for 1 seconds... 
00:10:49.078 00:10:49.078 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.078 ------------------------------------------------------------------------------------ 00:10:49.078 0,0 51808/s 216 MiB/s 0 0 00:10:49.078 ==================================================================================== 00:10:49.078 Total 51808/s 202 MiB/s 0 0' 00:10:49.078 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.078 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.078 04:51:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.078 04:51:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.078 04:51:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.078 04:51:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.078 04:51:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.078 04:51:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.078 04:51:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.078 04:51:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.078 04:51:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.078 04:51:12 -- accel/accel.sh@42 -- # jq -r . 00:10:49.078 [2024-11-18 04:51:12.421214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:49.078 [2024-11-18 04:51:12.421380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64596 ] 00:10:49.078 [2024-11-18 04:51:12.592044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.338 [2024-11-18 04:51:12.749609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.597 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.597 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.597 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val=0x1 00:10:49.597 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.597 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.597 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.597 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.597 04:51:12 -- accel/accel.sh@21 -- # val=compress 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 
00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val=software 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val=32 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val=32 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val=1 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val=No 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:49.598 04:51:12 -- accel/accel.sh@21 -- # val= 00:10:49.598 04:51:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # IFS=: 00:10:49.598 04:51:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@21 -- # val= 00:10:51.504 04:51:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # IFS=: 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@21 -- # val= 00:10:51.504 04:51:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # IFS=: 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@21 -- # val= 00:10:51.504 04:51:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # IFS=: 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@21 -- # val= 
00:10:51.504 04:51:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # IFS=: 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@21 -- # val= 00:10:51.504 04:51:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # IFS=: 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@21 -- # val= 00:10:51.504 04:51:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # IFS=: 00:10:51.504 04:51:14 -- accel/accel.sh@20 -- # read -r var val 00:10:51.504 04:51:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:51.504 ************************************ 00:10:51.504 END TEST accel_comp 00:10:51.504 ************************************ 00:10:51.504 04:51:14 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:51.504 04:51:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.504 00:10:51.504 real 0m4.635s 00:10:51.504 user 0m4.081s 00:10:51.504 sys 0m0.371s 00:10:51.504 04:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:51.504 04:51:14 -- common/autotest_common.sh@10 -- # set +x 00:10:51.504 04:51:14 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.504 04:51:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:51.504 04:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.504 04:51:14 -- common/autotest_common.sh@10 -- # set +x 00:10:51.504 ************************************ 00:10:51.504 START TEST accel_decomp 00:10:51.504 ************************************ 00:10:51.504 04:51:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.504 04:51:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:51.504 04:51:14 -- accel/accel.sh@17 -- # local accel_module 00:10:51.504 04:51:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.504 04:51:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.504 04:51:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.504 04:51:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.504 04:51:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.504 04:51:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.504 04:51:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.504 04:51:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.504 04:51:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.504 04:51:14 -- accel/accel.sh@42 -- # jq -r . 00:10:51.504 [2024-11-18 04:51:14.767688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:51.504 [2024-11-18 04:51:14.767834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64644 ] 00:10:51.504 [2024-11-18 04:51:14.937371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.764 [2024-11-18 04:51:15.095387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.668 04:51:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:53.668 00:10:53.668 SPDK Configuration: 00:10:53.668 Core mask: 0x1 00:10:53.668 00:10:53.668 Accel Perf Configuration: 00:10:53.668 Workload Type: decompress 00:10:53.668 Transfer size: 4096 bytes 00:10:53.669 Vector count 1 00:10:53.669 Module: software 00:10:53.669 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:53.669 Queue depth: 32 00:10:53.669 Allocate depth: 32 00:10:53.669 # threads/core: 1 00:10:53.669 Run time: 1 seconds 00:10:53.669 Verify: Yes 00:10:53.669 00:10:53.669 Running for 1 seconds... 00:10:53.669 00:10:53.669 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:53.669 ------------------------------------------------------------------------------------ 00:10:53.669 0,0 67008/s 261 MiB/s 0 0 00:10:53.669 ==================================================================================== 00:10:53.669 Total 67008/s 261 MiB/s 0 0' 00:10:53.669 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:53.669 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:53.669 04:51:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:53.669 04:51:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:53.669 04:51:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.669 04:51:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.669 04:51:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.669 04:51:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.669 04:51:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.669 04:51:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.669 04:51:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.669 04:51:17 -- accel/accel.sh@42 -- # jq -r . 00:10:53.669 [2024-11-18 04:51:17.062551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
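Sanity note on the tables above: accel_perf derives the Bandwidth column from the Transfers column times the configured transfer size, so the two can be cross-checked directly. A minimal shell sketch of that check (assuming the usual 1 MiB = 1048576 bytes convention, which the Total rows bear out):

    # 67008 transfers/s x 4096-byte buffers, reported in MiB/s
    echo $(( 67008 * 4096 / 1048576 ))   # prints 261, matching the decompress Total row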
00:10:53.669 [2024-11-18 04:51:17.062701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64670 ] 00:10:53.927 [2024-11-18 04:51:17.232812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.927 [2024-11-18 04:51:17.395602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=0x1 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=decompress 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=software 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=32 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- 
accel/accel.sh@21 -- # val=32 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=1 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val=Yes 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:54.186 04:51:17 -- accel/accel.sh@21 -- # val= 00:10:54.186 04:51:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # IFS=: 00:10:54.186 04:51:17 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@21 -- # val= 00:10:56.092 04:51:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # IFS=: 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@21 -- # val= 00:10:56.092 04:51:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # IFS=: 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@21 -- # val= 00:10:56.092 04:51:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # IFS=: 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@21 -- # val= 00:10:56.092 04:51:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # IFS=: 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@21 -- # val= 00:10:56.092 04:51:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # IFS=: 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@21 -- # val= 00:10:56.092 04:51:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # IFS=: 00:10:56.092 04:51:19 -- accel/accel.sh@20 -- # read -r var val 00:10:56.092 04:51:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:56.092 04:51:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:56.092 04:51:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.092 00:10:56.092 real 0m4.598s 00:10:56.092 user 0m4.101s 00:10:56.092 sys 0m0.315s 00:10:56.092 04:51:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:56.092 04:51:19 -- common/autotest_common.sh@10 -- # set +x 00:10:56.092 ************************************ 00:10:56.092 END TEST accel_decomp 00:10:56.092 ************************************ 00:10:56.092 04:51:19 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
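Each of these cases reduces to a single accel_perf invocation, visible in the xtrace above. A rough stand-alone reproduction of the accel_decomp case that just finished (a sketch only: the harness also feeds a JSON accel config on /dev/fd/62 via -c, omitted here on the assumption that the default software module is acceptable):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y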
00:10:56.092 04:51:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:56.092 04:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.092 04:51:19 -- common/autotest_common.sh@10 -- # set +x 00:10:56.092 ************************************ 00:10:56.092 START TEST accel_decmop_full 00:10:56.092 ************************************ 00:10:56.092 04:51:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:56.092 04:51:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:56.092 04:51:19 -- accel/accel.sh@17 -- # local accel_module 00:10:56.092 04:51:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:56.092 04:51:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:56.092 04:51:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.092 04:51:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.092 04:51:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.092 04:51:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.092 04:51:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.092 04:51:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.092 04:51:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.092 04:51:19 -- accel/accel.sh@42 -- # jq -r . 00:10:56.092 [2024-11-18 04:51:19.417232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:56.092 [2024-11-18 04:51:19.417538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64711 ] 00:10:56.092 [2024-11-18 04:51:19.573371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.351 [2024-11-18 04:51:19.742095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.256 04:51:21 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:58.256 00:10:58.256 SPDK Configuration: 00:10:58.256 Core mask: 0x1 00:10:58.256 00:10:58.256 Accel Perf Configuration: 00:10:58.256 Workload Type: decompress 00:10:58.256 Transfer size: 111250 bytes 00:10:58.256 Vector count 1 00:10:58.256 Module: software 00:10:58.256 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:58.256 Queue depth: 32 00:10:58.256 Allocate depth: 32 00:10:58.256 # threads/core: 1 00:10:58.256 Run time: 1 seconds 00:10:58.256 Verify: Yes 00:10:58.256 00:10:58.256 Running for 1 seconds... 
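The Transfer size of 111250 bytes in the configuration above comes from the -o 0 flag, which sizes each operation to the whole input file rather than 4 KiB chunks. The Total row in the table that follows obeys the same transfers-times-size arithmetic (shell sketch, 1 MiB = 1048576 bytes):

    echo $(( 4960 * 111250 / 1048576 ))   # prints 526, matching the Total MiB/s below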
00:10:58.256 00:10:58.256 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.256 ------------------------------------------------------------------------------------ 00:10:58.256 0,0 4960/s 526 MiB/s 0 0 00:10:58.256 ==================================================================================== 00:10:58.256 Total 4960/s 526 MiB/s 0 0' 00:10:58.256 04:51:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.256 04:51:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.256 04:51:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:58.256 04:51:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:58.256 04:51:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.256 04:51:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.256 04:51:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.256 04:51:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.256 04:51:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.256 04:51:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.256 04:51:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.256 04:51:21 -- accel/accel.sh@42 -- # jq -r . 00:10:58.256 [2024-11-18 04:51:21.739639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:58.256 [2024-11-18 04:51:21.739776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64743 ] 00:10:58.515 [2024-11-18 04:51:21.909451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.774 [2024-11-18 04:51:22.068482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.774 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.774 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.774 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=0x1 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=decompress 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:58.775 04:51:22 -- accel/accel.sh@20
-- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=software 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@23 -- # accel_module=software 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=32 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=32 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=1 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val=Yes 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:10:58.775 04:51:22 -- accel/accel.sh@21 -- # val= 00:10:58.775 04:51:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # IFS=: 00:10:58.775 04:51:22 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@21 -- # val= 00:11:00.680 04:51:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # IFS=: 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@21 -- # val= 00:11:00.680 04:51:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # IFS=: 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@21 -- # val= 00:11:00.680 04:51:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # IFS=: 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@21 -- # 
val= 00:11:00.680 04:51:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # IFS=: 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@21 -- # val= 00:11:00.680 04:51:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # IFS=: 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@21 -- # val= 00:11:00.680 04:51:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # IFS=: 00:11:00.680 04:51:24 -- accel/accel.sh@20 -- # read -r var val 00:11:00.680 04:51:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:00.680 04:51:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:00.680 04:51:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.680 00:11:00.680 real 0m4.643s 00:11:00.680 user 0m4.144s 00:11:00.680 sys 0m0.315s 00:11:00.680 ************************************ 00:11:00.680 END TEST accel_decmop_full 00:11:00.680 ************************************ 00:11:00.680 04:51:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:00.680 04:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:00.680 04:51:24 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:00.680 04:51:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:00.680 04:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.680 04:51:24 -- common/autotest_common.sh@10 -- # set +x 00:11:00.680 ************************************ 00:11:00.680 START TEST accel_decomp_mcore 00:11:00.680 ************************************ 00:11:00.680 04:51:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:00.680 04:51:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:00.680 04:51:24 -- accel/accel.sh@17 -- # local accel_module 00:11:00.680 04:51:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:00.680 04:51:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:00.680 04:51:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.680 04:51:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.680 04:51:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.680 04:51:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.680 04:51:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.680 04:51:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.680 04:51:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.680 04:51:24 -- accel/accel.sh@42 -- # jq -r . 00:11:00.680 [2024-11-18 04:51:24.113622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:00.680 [2024-11-18 04:51:24.113813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64784 ] 00:11:00.938 [2024-11-18 04:51:24.282928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.938 [2024-11-18 04:51:24.454113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.938 [2024-11-18 04:51:24.455183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.938 [2024-11-18 04:51:24.455352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.938 [2024-11-18 04:51:24.455364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.475 04:51:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:03.475 00:11:03.475 SPDK Configuration: 00:11:03.475 Core mask: 0xf 00:11:03.475 00:11:03.475 Accel Perf Configuration: 00:11:03.475 Workload Type: decompress 00:11:03.475 Transfer size: 4096 bytes 00:11:03.475 Vector count 1 00:11:03.475 Module: software 00:11:03.475 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.475 Queue depth: 32 00:11:03.475 Allocate depth: 32 00:11:03.475 # threads/core: 1 00:11:03.475 Run time: 1 seconds 00:11:03.475 Verify: Yes 00:11:03.475 00:11:03.475 Running for 1 seconds... 00:11:03.475 00:11:03.475 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:03.475 ------------------------------------------------------------------------------------ 00:11:03.475 0,0 54848/s 214 MiB/s 0 0 00:11:03.475 3,0 55392/s 216 MiB/s 0 0 00:11:03.475 2,0 55232/s 215 MiB/s 0 0 00:11:03.475 1,0 55712/s 217 MiB/s 0 0 00:11:03.475 ==================================================================================== 00:11:03.475 Total 221184/s 864 MiB/s 0 0' 00:11:03.475 04:51:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.475 04:51:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.475 04:51:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:03.475 04:51:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.475 04:51:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:03.475 04:51:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.475 04:51:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.475 04:51:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.475 04:51:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.475 04:51:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.475 04:51:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.475 04:51:26 -- accel/accel.sh@42 -- # jq -r . 00:11:03.475 [2024-11-18 04:51:26.501434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
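With -m 0xf the workload fans out across four reactors, one table row per core, and the per-core transfer rates should sum to the Total row. A quick shell check against the figures above:

    echo $(( 54848 + 55392 + 55232 + 55712 ))   # prints 221184, the Total transfers/s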
00:11:03.475 [2024-11-18 04:51:26.501785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64813 ] 00:11:03.475 [2024-11-18 04:51:26.676073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.475 [2024-11-18 04:51:26.856926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.475 [2024-11-18 04:51:26.857078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.475 [2024-11-18 04:51:26.857235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.475 [2024-11-18 04:51:26.857294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.734 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.734 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.734 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.734 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.734 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.734 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.734 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.734 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.734 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.734 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.734 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.734 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.734 04:51:27 -- accel/accel.sh@21 -- # val=0xf 00:11:03.734 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=decompress 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=software 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@23 -- # accel_module=software 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 
00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=32 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=32 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=1 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val=Yes 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 04:51:27 -- accel/accel.sh@21 -- # val= 00:11:03.735 04:51:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 04:51:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.639 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.639 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.639 04:51:28 -- 
accel/accel.sh@20 -- # read -r var val 00:11:05.639 04:51:28 -- accel/accel.sh@21 -- # val= 00:11:05.640 04:51:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.640 04:51:28 -- accel/accel.sh@20 -- # IFS=: 00:11:05.640 04:51:28 -- accel/accel.sh@20 -- # read -r var val 00:11:05.640 04:51:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.640 04:51:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:05.640 04:51:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.640 00:11:05.640 real 0m4.784s 00:11:05.640 user 0m7.022s 00:11:05.640 sys 0m0.202s 00:11:05.640 04:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.640 ************************************ 00:11:05.640 END TEST accel_decomp_mcore 00:11:05.640 ************************************ 00:11:05.640 04:51:28 -- common/autotest_common.sh@10 -- # set +x 00:11:05.640 04:51:28 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:05.640 04:51:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:05.640 04:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.640 04:51:28 -- common/autotest_common.sh@10 -- # set +x 00:11:05.640 ************************************ 00:11:05.640 START TEST accel_decomp_full_mcore 00:11:05.640 ************************************ 00:11:05.640 04:51:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:05.640 04:51:28 -- accel/accel.sh@16 -- # local accel_opc 00:11:05.640 04:51:28 -- accel/accel.sh@17 -- # local accel_module 00:11:05.640 04:51:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:05.640 04:51:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:05.640 04:51:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.640 04:51:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.640 04:51:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.640 04:51:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.640 04:51:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.640 04:51:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.640 04:51:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.640 04:51:28 -- accel/accel.sh@42 -- # jq -r . 00:11:05.640 [2024-11-18 04:51:28.957919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:05.640 [2024-11-18 04:51:28.958075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64863 ] 00:11:05.640 [2024-11-18 04:51:29.130422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.907 [2024-11-18 04:51:29.294874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.907 [2024-11-18 04:51:29.295023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.907 [2024-11-18 04:51:29.295138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.907 [2024-11-18 04:51:29.295353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.812 04:51:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:07.812 00:11:07.812 SPDK Configuration: 00:11:07.812 Core mask: 0xf 00:11:07.812 00:11:07.812 Accel Perf Configuration: 00:11:07.812 Workload Type: decompress 00:11:07.812 Transfer size: 111250 bytes 00:11:07.812 Vector count 1 00:11:07.812 Module: software 00:11:07.812 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:07.812 Queue depth: 32 00:11:07.812 Allocate depth: 32 00:11:07.812 # threads/core: 1 00:11:07.812 Run time: 1 seconds 00:11:07.812 Verify: Yes 00:11:07.812 00:11:07.812 Running for 1 seconds... 00:11:07.812 00:11:07.812 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:07.812 ------------------------------------------------------------------------------------ 00:11:07.812 0,0 4544/s 482 MiB/s 0 0 00:11:07.812 3,0 4160/s 441 MiB/s 0 0 00:11:07.812 2,0 4512/s 478 MiB/s 0 0 00:11:07.812 1,0 4544/s 482 MiB/s 0 0 00:11:07.812 ==================================================================================== 00:11:07.812 Total 17760/s 1884 MiB/s 0 0' 00:11:07.812 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:07.812 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:07.812 04:51:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.812 04:51:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.812 04:51:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.812 04:51:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.812 04:51:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.812 04:51:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.812 04:51:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.812 04:51:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.812 04:51:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.812 04:51:31 -- accel/accel.sh@42 -- # jq -r . 00:11:08.072 [2024-11-18 04:51:31.362417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
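Combining -o 0 with -m 0xf gives four cores of full-buffer (111250-byte) decompression; the aggregate again follows from transfers times transfer size (shell sketch):

    echo $(( 17760 * 111250 / 1048576 ))   # prints 1884, the Total MiB/s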
00:11:08.072 [2024-11-18 04:51:31.362542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64897 ] 00:11:08.072 [2024-11-18 04:51:31.517746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.331 [2024-11-18 04:51:31.686676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.331 [2024-11-18 04:51:31.686977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.331 [2024-11-18 04:51:31.686986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.331 [2024-11-18 04:51:31.686846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=0xf 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=decompress 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=software 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 
00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=32 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=32 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=1 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val=Yes 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:08.591 04:51:31 -- accel/accel.sh@21 -- # val= 00:11:08.591 04:51:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # IFS=: 00:11:08.591 04:51:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- 
accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@21 -- # val= 00:11:10.498 04:51:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # IFS=: 00:11:10.498 04:51:33 -- accel/accel.sh@20 -- # read -r var val 00:11:10.498 04:51:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.498 04:51:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:10.498 04:51:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.498 00:11:10.498 real 0m4.807s 00:11:10.498 user 0m14.244s 00:11:10.498 sys 0m0.361s 00:11:10.498 04:51:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.498 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:11:10.498 ************************************ 00:11:10.498 END TEST accel_decomp_full_mcore 00:11:10.498 ************************************ 00:11:10.498 04:51:33 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.498 04:51:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:10.498 04:51:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.498 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:11:10.498 ************************************ 00:11:10.498 START TEST accel_decomp_mthread 00:11:10.498 ************************************ 00:11:10.498 04:51:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.498 04:51:33 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.498 04:51:33 -- accel/accel.sh@17 -- # local accel_module 00:11:10.498 04:51:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.498 04:51:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.498 04:51:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.498 04:51:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.498 04:51:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.498 04:51:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.498 04:51:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.498 04:51:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.498 04:51:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.498 04:51:33 -- accel/accel.sh@42 -- # jq -r . 00:11:10.498 [2024-11-18 04:51:33.812220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:10.498 [2024-11-18 04:51:33.812517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64941 ] 00:11:10.498 [2024-11-18 04:51:33.969151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.758 [2024-11-18 04:51:34.128997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.662 04:51:36 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:12.663 00:11:12.663 SPDK Configuration: 00:11:12.663 Core mask: 0x1 00:11:12.663 00:11:12.663 Accel Perf Configuration: 00:11:12.663 Workload Type: decompress 00:11:12.663 Transfer size: 4096 bytes 00:11:12.663 Vector count 1 00:11:12.663 Module: software 00:11:12.663 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:12.663 Queue depth: 32 00:11:12.663 Allocate depth: 32 00:11:12.663 # threads/core: 2 00:11:12.663 Run time: 1 seconds 00:11:12.663 Verify: Yes 00:11:12.663 00:11:12.663 Running for 1 seconds... 00:11:12.663 00:11:12.663 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:12.663 ------------------------------------------------------------------------------------ 00:11:12.663 0,1 31968/s 124 MiB/s 0 0 00:11:12.663 0,0 31872/s 124 MiB/s 0 0 00:11:12.663 ==================================================================================== 00:11:12.663 Total 63840/s 249 MiB/s 0 0' 00:11:12.663 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 04:51:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:12.663 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 04:51:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:12.663 04:51:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:12.663 04:51:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:12.663 04:51:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.663 04:51:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.663 04:51:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:12.663 04:51:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:12.663 04:51:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:12.663 04:51:36 -- accel/accel.sh@42 -- # jq -r . 00:11:12.663 [2024-11-18 04:51:36.134287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
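Here -T 2 pins two worker threads to a single core, which is why the table reports rows 0,0 and 0,1 rather than one row per core; their transfer rates add up to the Total row (quick shell check):

    echo $(( 31968 + 31872 ))   # prints 63840, the Total transfers/s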
00:11:12.663 [2024-11-18 04:51:36.134450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65090 ] 00:11:12.922 [2024-11-18 04:51:36.303374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.184 [2024-11-18 04:51:36.463691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=0x1 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=decompress 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=software 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@23 -- # accel_module=software 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=32 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- 
accel/accel.sh@21 -- # val=32 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=2 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val=Yes 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:13.184 04:51:36 -- accel/accel.sh@21 -- # val= 00:11:13.184 04:51:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # IFS=: 00:11:13.184 04:51:36 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@21 -- # val= 00:11:15.084 04:51:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # IFS=: 00:11:15.084 04:51:38 -- accel/accel.sh@20 -- # read -r var val 00:11:15.084 04:51:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.084 04:51:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:15.084 04:51:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.084 00:11:15.084 real 0m4.625s 00:11:15.084 user 0m4.112s 00:11:15.084 sys 0m0.326s 00:11:15.084 04:51:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:15.084 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.084 ************************************ 00:11:15.084 END 
TEST accel_decomp_mthread 00:11:15.084 ************************************ 00:11:15.084 04:51:38 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.084 04:51:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:15.084 04:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.084 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.084 ************************************ 00:11:15.084 START TEST accel_decomp_full_mthread ************************************ 00:11:15.084 04:51:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.084 04:51:38 -- accel/accel.sh@16 -- # local accel_opc 00:11:15.084 04:51:38 -- accel/accel.sh@17 -- # local accel_module 00:11:15.084 04:51:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.084 04:51:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.084 04:51:38 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.084 04:51:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.084 04:51:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.084 04:51:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.084 04:51:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.084 04:51:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.084 04:51:38 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.084 04:51:38 -- accel/accel.sh@42 -- # jq -r . 00:11:15.084 [2024-11-18 04:51:38.494446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:15.084 [2024-11-18 04:51:38.494618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65131 ] 00:11:15.342 [2024-11-18 04:51:38.664950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.342 [2024-11-18 04:51:38.834325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.877 04:51:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:17.877 00:11:17.877 SPDK Configuration: 00:11:17.877 Core mask: 0x1 00:11:17.877 00:11:17.877 Accel Perf Configuration: 00:11:17.877 Workload Type: decompress 00:11:17.877 Transfer size: 111250 bytes 00:11:17.877 Vector count 1 00:11:17.877 Module: software 00:11:17.877 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:17.877 Queue depth: 32 00:11:17.877 Allocate depth: 32 00:11:17.877 # threads/core: 2 00:11:17.877 Run time: 1 seconds 00:11:17.877 Verify: Yes 00:11:17.877 00:11:17.877 Running for 1 seconds...
00:11:17.877 00:11:17.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.877 ------------------------------------------------------------------------------------ 00:11:17.877 0,1 2496/s 103 MiB/s 0 0 00:11:17.877 0,0 2464/s 101 MiB/s 0 0 00:11:17.877 ==================================================================================== 00:11:17.877 Total 4960/s 526 MiB/s 0 0' 00:11:17.877 04:51:40 -- accel/accel.sh@20 -- # IFS=: 00:11:17.877 04:51:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:17.877 04:51:40 -- accel/accel.sh@20 -- # read -r var val 00:11:17.877 04:51:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:17.877 04:51:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.877 04:51:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.877 04:51:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.877 04:51:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.877 04:51:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.877 04:51:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.877 04:51:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.877 04:51:40 -- accel/accel.sh@42 -- # jq -r . 00:11:17.877 [2024-11-18 04:51:40.926665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:17.877 [2024-11-18 04:51:40.927006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65157 ] 00:11:17.877 [2024-11-18 04:51:41.104181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.877 [2024-11-18 04:51:41.264707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=0x1 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=decompress 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=software 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@23 -- # accel_module=software 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=32 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=32 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val=2 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.136 04:51:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:18.136 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.136 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.137 04:51:41 -- accel/accel.sh@21 -- # val=Yes 00:11:18.137 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.137 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.137 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:18.137 04:51:41 -- accel/accel.sh@21 -- # val= 00:11:18.137 04:51:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # IFS=: 00:11:18.137 04:51:41 -- accel/accel.sh@20 -- # read -r var val 00:11:20.042 04:51:43 -- accel/accel.sh@21 -- # val= 00:11:20.042 04:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # IFS=: 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # read -r var val 00:11:20.042 04:51:43 -- accel/accel.sh@21 -- # val= 00:11:20.042 04:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # IFS=: 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # read -r var val 00:11:20.042 04:51:43 -- accel/accel.sh@21 -- # val= 00:11:20.042 04:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # IFS=: 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # 
read -r var val 00:11:20.042 04:51:43 -- accel/accel.sh@21 -- # val= 00:11:20.042 04:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # IFS=: 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # read -r var val 00:11:20.042 04:51:43 -- accel/accel.sh@21 -- # val= 00:11:20.042 04:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # IFS=: 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # read -r var val 00:11:20.042 04:51:43 -- accel/accel.sh@21 -- # val= 00:11:20.042 04:51:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # IFS=: 00:11:20.042 04:51:43 -- accel/accel.sh@20 -- # read -r var val 00:11:20.042 ************************************ 00:11:20.042 END TEST accel_decomp_full_mthread 00:11:20.042 ************************************ 00:11:20.042 04:51:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:20.042 04:51:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:20.042 04:51:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:20.042 00:11:20.042 real 0m4.778s 00:11:20.042 user 0m4.268s 00:11:20.042 sys 0m0.328s 00:11:20.042 04:51:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:20.042 04:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:20.042 04:51:43 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:20.042 04:51:43 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:20.042 04:51:43 -- accel/accel.sh@129 -- # build_accel_config 00:11:20.042 04:51:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.042 04:51:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:20.042 04:51:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.042 04:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.042 04:51:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.042 04:51:43 -- common/autotest_common.sh@10 -- # set +x 00:11:20.042 04:51:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.042 04:51:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.042 04:51:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.042 04:51:43 -- accel/accel.sh@42 -- # jq -r . 00:11:20.042 ************************************ 00:11:20.042 START TEST accel_dif_functional_tests ************************************ 00:11:20.042 04:51:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:20.042 [2024-11-18 04:51:43.357476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:20.042 [2024-11-18 04:51:43.357656] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65205 ] 00:11:20.042 [2024-11-18 04:51:43.529761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.301 [2024-11-18 04:51:43.697745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.301 [2024-11-18 04:51:43.697862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.301 [2024-11-18 04:51:43.697876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.560 00:11:20.560 00:11:20.560 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.560 http://cunit.sourceforge.net/ 00:11:20.560 00:11:20.560 00:11:20.560 Suite: accel_dif 00:11:20.560 Test: verify: DIF generated, GUARD check ...passed 00:11:20.560 Test: verify: DIF generated, APPTAG check ...passed 00:11:20.560 Test: verify: DIF generated, REFTAG check ...passed 00:11:20.560 Test: verify: DIF not generated, GUARD check ...[2024-11-18 04:51:43.955518] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:20.560 passed 00:11:20.560 Test: verify: DIF not generated, APPTAG check ...[2024-11-18 04:51:43.955624] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:20.560 [2024-11-18 04:51:43.955688] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:20.560 passed 00:11:20.560 Test: verify: DIF not generated, REFTAG check ...[2024-11-18 04:51:43.955742] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:20.560 [2024-11-18 04:51:43.955795] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:20.560 passed 00:11:20.560 Test: verify: APPTAG correct, APPTAG check ...[2024-11-18 04:51:43.956363] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:20.560 passed 00:11:20.560 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:11:20.560 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:20.560 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:20.560 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-11-18 04:51:43.956516] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:20.560 passed 00:11:20.560 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-18 04:51:43.956886] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:20.560 passed 00:11:20.560 Test: generate copy: DIF generated, GUARD check ...passed 00:11:20.560 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:20.560 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:20.560 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:20.560 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:20.560 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:20.560 Test: generate copy: iovecs-len validate ...[2024-11-18 04:51:43.957512] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:20.560 passed 00:11:20.560 Test: generate copy: buffer alignment validate ...passed 00:11:20.560 00:11:20.560 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.560 suites 1 1 n/a 0 0 00:11:20.560 tests 20 20 20 0 0 00:11:20.560 asserts 204 204 204 0 n/a 00:11:20.561 00:11:20.561 Elapsed time = 0.005 seconds 00:11:21.497 00:11:21.497 real 0m1.688s 00:11:21.497 user 0m3.167s 00:11:21.497 sys 0m0.221s 00:11:21.497 04:51:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.497 04:51:44 -- common/autotest_common.sh@10 -- # set +x 00:11:21.497 ************************************ 00:11:21.497 END TEST accel_dif_functional_tests 00:11:21.497 ************************************ 00:11:21.497 00:11:21.497 real 1m43.624s 00:11:21.497 user 1m53.315s 00:11:21.497 sys 0m8.434s 00:11:21.497 04:51:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.497 04:51:45 -- common/autotest_common.sh@10 -- # set +x 00:11:21.497 ************************************ 00:11:21.497 END TEST accel 00:11:21.497 ************************************ 00:11:21.756 04:51:45 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:21.756 04:51:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:21.756 04:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.756 04:51:45 -- common/autotest_common.sh@10 -- # set +x 00:11:21.756 ************************************ 00:11:21.756 START TEST accel_rpc 00:11:21.756 ************************************ 00:11:21.756 04:51:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:21.756 * Looking for test storage... 00:11:21.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:21.756 04:51:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:21.756 04:51:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:21.756 04:51:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:21.756 04:51:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:21.756 04:51:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:21.756 04:51:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:21.756 04:51:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:21.756 04:51:45 -- scripts/common.sh@335 -- # IFS=.-: 00:11:21.756 04:51:45 -- scripts/common.sh@335 -- # read -ra ver1 00:11:21.756 04:51:45 -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.756 04:51:45 -- scripts/common.sh@336 -- # read -ra ver2 00:11:21.756 04:51:45 -- scripts/common.sh@337 -- # local 'op=<' 00:11:21.756 04:51:45 -- scripts/common.sh@339 -- # ver1_l=2 00:11:21.756 04:51:45 -- scripts/common.sh@340 -- # ver2_l=1 00:11:21.756 04:51:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:21.756 04:51:45 -- scripts/common.sh@343 -- # case "$op" in 00:11:21.756 04:51:45 -- scripts/common.sh@344 -- # : 1 00:11:21.756 04:51:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:21.756 04:51:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.756 04:51:45 -- scripts/common.sh@364 -- # decimal 1 00:11:21.756 04:51:45 -- scripts/common.sh@352 -- # local d=1 00:11:21.756 04:51:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.756 04:51:45 -- scripts/common.sh@354 -- # echo 1 00:11:21.756 04:51:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:21.756 04:51:45 -- scripts/common.sh@365 -- # decimal 2 00:11:21.756 04:51:45 -- scripts/common.sh@352 -- # local d=2 00:11:21.756 04:51:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.756 04:51:45 -- scripts/common.sh@354 -- # echo 2 00:11:21.756 04:51:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:21.756 04:51:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:21.756 04:51:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:21.756 04:51:45 -- scripts/common.sh@367 -- # return 0 00:11:21.756 04:51:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.756 04:51:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:21.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.756 --rc genhtml_branch_coverage=1 00:11:21.756 --rc genhtml_function_coverage=1 00:11:21.756 --rc genhtml_legend=1 00:11:21.756 --rc geninfo_all_blocks=1 00:11:21.756 --rc geninfo_unexecuted_blocks=1 00:11:21.756 00:11:21.756 ' 00:11:21.756 04:51:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:21.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.756 --rc genhtml_branch_coverage=1 00:11:21.756 --rc genhtml_function_coverage=1 00:11:21.756 --rc genhtml_legend=1 00:11:21.756 --rc geninfo_all_blocks=1 00:11:21.756 --rc geninfo_unexecuted_blocks=1 00:11:21.756 00:11:21.756 ' 00:11:21.756 04:51:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:21.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.756 --rc genhtml_branch_coverage=1 00:11:21.756 --rc genhtml_function_coverage=1 00:11:21.756 --rc genhtml_legend=1 00:11:21.756 --rc geninfo_all_blocks=1 00:11:21.756 --rc geninfo_unexecuted_blocks=1 00:11:21.756 00:11:21.756 ' 00:11:21.756 04:51:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:21.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.756 --rc genhtml_branch_coverage=1 00:11:21.756 --rc genhtml_function_coverage=1 00:11:21.756 --rc genhtml_legend=1 00:11:21.756 --rc geninfo_all_blocks=1 00:11:21.756 --rc geninfo_unexecuted_blocks=1 00:11:21.756 00:11:21.756 ' 00:11:21.756 04:51:45 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:21.756 04:51:45 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65294 00:11:21.756 04:51:45 -- accel/accel_rpc.sh@15 -- # waitforlisten 65294 00:11:21.756 04:51:45 -- common/autotest_common.sh@829 -- # '[' -z 65294 ']' 00:11:21.756 04:51:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.756 04:51:45 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:21.756 04:51:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.756 04:51:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
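For reference, a minimal sketch (outside the recorded run) of the RPC sequence the xtrace below drives once the socket is up; the rpc.py calls mirror the traced commands, and the paths assume the same spdk_repo layout:

# Start the target halted at --wait-for-rpc, assign the copy opcode to the
# software module before init, then finish startup and read the assignment back.
# (waitforlisten-style socket polling is omitted here for brevity.)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
tgt_pid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc accel_assign_opc -o copy -m software      # pre-init opcode assignment
$rpc framework_start_init                      # complete subsystem initialization
$rpc accel_get_opc_assignments | jq -r .copy   # expect: software
kill "$tgt_pid" && wait "$tgt_pid"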
00:11:21.756 04:51:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.756 04:51:45 -- common/autotest_common.sh@10 -- # set +x 00:11:22.015 [2024-11-18 04:51:45.320090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:22.016 [2024-11-18 04:51:45.320296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65294 ] 00:11:22.016 [2024-11-18 04:51:45.489345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.275 [2024-11-18 04:51:45.656751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.275 [2024-11-18 04:51:45.656971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.843 04:51:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.843 04:51:46 -- common/autotest_common.sh@862 -- # return 0 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:22.843 04:51:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.843 04:51:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.843 04:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 ************************************ 00:11:22.843 START TEST accel_assign_opcode 00:11:22.843 ************************************ 00:11:22.843 04:51:46 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:22.843 04:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.843 04:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 [2024-11-18 04:51:46.237673] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:22.843 04:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:22.843 04:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.843 04:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 [2024-11-18 04:51:46.245622] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:22.843 04:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.843 04:51:46 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:22.843 04:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.843 04:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:23.417 04:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.417 04:51:46 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:23.417 04:51:46 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:23.417 04:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.417 04:51:46 -- accel/accel_rpc.sh@42 -- # grep software 00:11:23.417 04:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:23.417 04:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.417 software 00:11:23.417 00:11:23.417 
real 0m0.632s 00:11:23.417 user 0m0.015s 00:11:23.417 sys 0m0.009s 00:11:23.417 04:51:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.417 04:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:23.417 ************************************ 00:11:23.417 END TEST accel_assign_opcode 00:11:23.417 ************************************ 00:11:23.417 04:51:46 -- accel/accel_rpc.sh@55 -- # killprocess 65294 00:11:23.417 04:51:46 -- common/autotest_common.sh@936 -- # '[' -z 65294 ']' 00:11:23.417 04:51:46 -- common/autotest_common.sh@940 -- # kill -0 65294 00:11:23.417 04:51:46 -- common/autotest_common.sh@941 -- # uname 00:11:23.417 04:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.417 04:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65294 00:11:23.709 04:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:23.709 04:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:23.709 killing process with pid 65294 00:11:23.709 04:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65294' 00:11:23.709 04:51:46 -- common/autotest_common.sh@955 -- # kill 65294 00:11:23.709 04:51:46 -- common/autotest_common.sh@960 -- # wait 65294 00:11:25.652 00:11:25.652 real 0m3.774s 00:11:25.652 user 0m3.723s 00:11:25.652 sys 0m0.529s 00:11:25.652 04:51:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:25.652 04:51:48 -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 ************************************ 00:11:25.652 END TEST accel_rpc 00:11:25.652 ************************************ 00:11:25.652 04:51:48 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:25.652 04:51:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:25.652 04:51:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.652 04:51:48 -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 ************************************ 00:11:25.652 START TEST app_cmdline 00:11:25.652 ************************************ 00:11:25.652 04:51:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:25.652 * Looking for test storage... 
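The teardown traced above follows a killprocess-style pattern; a condensed sketch (an assumption: simplified from the helper whose individual steps appear in the xtrace) looks like:

# Simplified process teardown: probe the pid, name it for the log, then
# SIGTERM and reap. Mirrors the kill -0 / ps / kill / wait sequence above.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] || echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}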
00:11:25.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:25.652 04:51:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:25.652 04:51:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:25.653 04:51:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:25.653 04:51:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:25.653 04:51:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:25.653 04:51:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:25.653 04:51:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:25.653 04:51:49 -- scripts/common.sh@335 -- # IFS=.-: 00:11:25.653 04:51:49 -- scripts/common.sh@335 -- # read -ra ver1 00:11:25.653 04:51:49 -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.653 04:51:49 -- scripts/common.sh@336 -- # read -ra ver2 00:11:25.653 04:51:49 -- scripts/common.sh@337 -- # local 'op=<' 00:11:25.653 04:51:49 -- scripts/common.sh@339 -- # ver1_l=2 00:11:25.653 04:51:49 -- scripts/common.sh@340 -- # ver2_l=1 00:11:25.653 04:51:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:25.653 04:51:49 -- scripts/common.sh@343 -- # case "$op" in 00:11:25.653 04:51:49 -- scripts/common.sh@344 -- # : 1 00:11:25.653 04:51:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:25.653 04:51:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.653 04:51:49 -- scripts/common.sh@364 -- # decimal 1 00:11:25.653 04:51:49 -- scripts/common.sh@352 -- # local d=1 00:11:25.653 04:51:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.653 04:51:49 -- scripts/common.sh@354 -- # echo 1 00:11:25.653 04:51:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:25.653 04:51:49 -- scripts/common.sh@365 -- # decimal 2 00:11:25.653 04:51:49 -- scripts/common.sh@352 -- # local d=2 00:11:25.653 04:51:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.653 04:51:49 -- scripts/common.sh@354 -- # echo 2 00:11:25.653 04:51:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:25.653 04:51:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:25.653 04:51:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:25.653 04:51:49 -- scripts/common.sh@367 -- # return 0 00:11:25.653 04:51:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.653 04:51:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:25.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.653 --rc genhtml_branch_coverage=1 00:11:25.653 --rc genhtml_function_coverage=1 00:11:25.653 --rc genhtml_legend=1 00:11:25.653 --rc geninfo_all_blocks=1 00:11:25.653 --rc geninfo_unexecuted_blocks=1 00:11:25.653 00:11:25.653 ' 00:11:25.653 04:51:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:25.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.653 --rc genhtml_branch_coverage=1 00:11:25.653 --rc genhtml_function_coverage=1 00:11:25.653 --rc genhtml_legend=1 00:11:25.653 --rc geninfo_all_blocks=1 00:11:25.653 --rc geninfo_unexecuted_blocks=1 00:11:25.653 00:11:25.653 ' 00:11:25.653 04:51:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:25.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.653 --rc genhtml_branch_coverage=1 00:11:25.653 --rc genhtml_function_coverage=1 00:11:25.653 --rc genhtml_legend=1 00:11:25.653 --rc geninfo_all_blocks=1 00:11:25.653 --rc geninfo_unexecuted_blocks=1 00:11:25.653 00:11:25.653 ' 00:11:25.653 04:51:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:25.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.653 --rc genhtml_branch_coverage=1 00:11:25.653 --rc genhtml_function_coverage=1 00:11:25.653 --rc genhtml_legend=1 00:11:25.653 --rc geninfo_all_blocks=1 00:11:25.653 --rc geninfo_unexecuted_blocks=1 00:11:25.653 00:11:25.653 ' 00:11:25.653 04:51:49 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:25.653 04:51:49 -- app/cmdline.sh@17 -- # spdk_tgt_pid=65410 00:11:25.653 04:51:49 -- app/cmdline.sh@18 -- # waitforlisten 65410 00:11:25.653 04:51:49 -- common/autotest_common.sh@829 -- # '[' -z 65410 ']' 00:11:25.653 04:51:49 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:25.653 04:51:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.653 04:51:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.653 04:51:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.653 04:51:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.653 04:51:49 -- common/autotest_common.sh@10 -- # set +x 00:11:25.653 [2024-11-18 04:51:49.142481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:25.653 [2024-11-18 04:51:49.142668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65410 ] 00:11:25.912 [2024-11-18 04:51:49.316312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.186 [2024-11-18 04:51:49.540980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:26.186 [2024-11-18 04:51:49.541215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.566 04:51:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.566 04:51:50 -- common/autotest_common.sh@862 -- # return 0 00:11:27.566 04:51:50 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:27.566 { 00:11:27.566 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:11:27.566 "fields": { 00:11:27.566 "major": 24, 00:11:27.566 "minor": 1, 00:11:27.566 "patch": 1, 00:11:27.566 "suffix": "-pre", 00:11:27.566 "commit": "c13c99a5e" 00:11:27.566 } 00:11:27.567 } 00:11:27.567 04:51:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:27.567 04:51:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:27.567 04:51:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:27.567 04:51:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:27.567 04:51:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:27.567 04:51:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:27.567 04:51:51 -- app/cmdline.sh@26 -- # sort 00:11:27.567 04:51:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.826 04:51:51 -- common/autotest_common.sh@10 -- # set +x 00:11:27.826 04:51:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.826 04:51:51 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:27.826 04:51:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:27.826 04:51:51 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.826 04:51:51 -- common/autotest_common.sh@650 -- # local es=0 00:11:27.826 04:51:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.826 04:51:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.826 04:51:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.826 04:51:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.826 04:51:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.826 04:51:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.826 04:51:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.826 04:51:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.826 04:51:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:27.826 04:51:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:28.086 request: 00:11:28.086 { 00:11:28.086 "method": "env_dpdk_get_mem_stats", 00:11:28.086 "req_id": 1 00:11:28.086 } 00:11:28.086 Got JSON-RPC error response 00:11:28.086 response: 00:11:28.086 { 00:11:28.086 "code": -32601, 00:11:28.086 "message": "Method not found" 00:11:28.086 } 00:11:28.086 04:51:51 -- common/autotest_common.sh@653 -- # es=1 00:11:28.086 04:51:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.086 04:51:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.086 04:51:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.086 04:51:51 -- app/cmdline.sh@1 -- # killprocess 65410 00:11:28.086 04:51:51 -- common/autotest_common.sh@936 -- # '[' -z 65410 ']' 00:11:28.086 04:51:51 -- common/autotest_common.sh@940 -- # kill -0 65410 00:11:28.086 04:51:51 -- common/autotest_common.sh@941 -- # uname 00:11:28.086 04:51:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.086 04:51:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65410 00:11:28.086 killing process with pid 65410 00:11:28.086 04:51:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:28.086 04:51:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:28.086 04:51:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65410' 00:11:28.086 04:51:51 -- common/autotest_common.sh@955 -- # kill 65410 00:11:28.086 04:51:51 -- common/autotest_common.sh@960 -- # wait 65410 00:11:29.992 00:11:29.992 real 0m4.409s 00:11:29.992 user 0m5.072s 00:11:29.992 sys 0m0.556s 00:11:29.992 ************************************ 00:11:29.992 END TEST app_cmdline 00:11:29.992 ************************************ 00:11:29.992 04:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:29.992 04:51:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.992 04:51:53 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:29.992 04:51:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.992 04:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.992 04:51:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.992 
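For reference, a minimal sketch (not part of the recorded run) of the allow-list behaviour app_cmdline just verified: the target serves only the two listed RPCs, and any other method comes back with the JSON-RPC -32601 error shown above.

# Only spdk_get_version and rpc_get_methods are served by this target.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc spdk_get_version            # allowed: returns the version object above
$rpc env_dpdk_get_mem_stats      # rejected: code -32601, "Method not found"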
************************************ 00:11:29.992 START TEST version 00:11:29.992 ************************************ 00:11:29.992 04:51:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:29.992 * Looking for test storage... 00:11:29.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:29.992 04:51:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:29.992 04:51:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:29.992 04:51:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.250 04:51:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.250 04:51:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.250 04:51:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.250 04:51:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.250 04:51:53 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.250 04:51:53 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.250 04:51:53 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.250 04:51:53 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.250 04:51:53 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.250 04:51:53 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.250 04:51:53 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.250 04:51:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.250 04:51:53 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.250 04:51:53 -- scripts/common.sh@344 -- # : 1 00:11:30.250 04:51:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.250 04:51:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.250 04:51:53 -- scripts/common.sh@364 -- # decimal 1 00:11:30.250 04:51:53 -- scripts/common.sh@352 -- # local d=1 00:11:30.250 04:51:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.250 04:51:53 -- scripts/common.sh@354 -- # echo 1 00:11:30.250 04:51:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.250 04:51:53 -- scripts/common.sh@365 -- # decimal 2 00:11:30.250 04:51:53 -- scripts/common.sh@352 -- # local d=2 00:11:30.250 04:51:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.250 04:51:53 -- scripts/common.sh@354 -- # echo 2 00:11:30.250 04:51:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.250 04:51:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.250 04:51:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.250 04:51:53 -- scripts/common.sh@367 -- # return 0 00:11:30.250 04:51:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.250 04:51:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.250 --rc genhtml_branch_coverage=1 00:11:30.250 --rc genhtml_function_coverage=1 00:11:30.250 --rc genhtml_legend=1 00:11:30.250 --rc geninfo_all_blocks=1 00:11:30.250 --rc geninfo_unexecuted_blocks=1 00:11:30.250 00:11:30.250 ' 00:11:30.250 04:51:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.250 --rc genhtml_branch_coverage=1 00:11:30.250 --rc genhtml_function_coverage=1 00:11:30.250 --rc genhtml_legend=1 00:11:30.250 --rc geninfo_all_blocks=1 00:11:30.250 --rc geninfo_unexecuted_blocks=1 00:11:30.250 00:11:30.250 ' 00:11:30.250 04:51:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.250 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:30.250 --rc genhtml_branch_coverage=1 00:11:30.250 --rc genhtml_function_coverage=1 00:11:30.250 --rc genhtml_legend=1 00:11:30.250 --rc geninfo_all_blocks=1 00:11:30.250 --rc geninfo_unexecuted_blocks=1 00:11:30.250 00:11:30.250 ' 00:11:30.250 04:51:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.250 --rc genhtml_branch_coverage=1 00:11:30.250 --rc genhtml_function_coverage=1 00:11:30.250 --rc genhtml_legend=1 00:11:30.251 --rc geninfo_all_blocks=1 00:11:30.251 --rc geninfo_unexecuted_blocks=1 00:11:30.251 00:11:30.251 ' 00:11:30.251 04:51:53 -- app/version.sh@17 -- # get_header_version major 00:11:30.251 04:51:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.251 04:51:53 -- app/version.sh@14 -- # cut -f2 00:11:30.251 04:51:53 -- app/version.sh@14 -- # tr -d '"' 00:11:30.251 04:51:53 -- app/version.sh@17 -- # major=24 00:11:30.251 04:51:53 -- app/version.sh@18 -- # get_header_version minor 00:11:30.251 04:51:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.251 04:51:53 -- app/version.sh@14 -- # cut -f2 00:11:30.251 04:51:53 -- app/version.sh@14 -- # tr -d '"' 00:11:30.251 04:51:53 -- app/version.sh@18 -- # minor=1 00:11:30.251 04:51:53 -- app/version.sh@19 -- # get_header_version patch 00:11:30.251 04:51:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.251 04:51:53 -- app/version.sh@14 -- # cut -f2 00:11:30.251 04:51:53 -- app/version.sh@14 -- # tr -d '"' 00:11:30.251 04:51:53 -- app/version.sh@19 -- # patch=1 00:11:30.251 04:51:53 -- app/version.sh@20 -- # get_header_version suffix 00:11:30.251 04:51:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.251 04:51:53 -- app/version.sh@14 -- # cut -f2 00:11:30.251 04:51:53 -- app/version.sh@14 -- # tr -d '"' 00:11:30.251 04:51:53 -- app/version.sh@20 -- # suffix=-pre 00:11:30.251 04:51:53 -- app/version.sh@22 -- # version=24.1 00:11:30.251 04:51:53 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:30.251 04:51:53 -- app/version.sh@25 -- # version=24.1.1 00:11:30.251 04:51:53 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:30.251 04:51:53 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:30.251 04:51:53 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:30.251 04:51:53 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:30.251 04:51:53 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:30.251 00:11:30.251 real 0m0.238s 00:11:30.251 user 0m0.158s 00:11:30.251 sys 0m0.122s 00:11:30.251 ************************************ 00:11:30.251 END TEST version 00:11:30.251 ************************************ 00:11:30.251 04:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.251 04:51:53 -- common/autotest_common.sh@10 -- # set +x 00:11:30.251 04:51:53 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:11:30.251 04:51:53 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
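For reference, the get_header_version calls traced above reduce to a grep/cut/tr pipeline over include/spdk/version.h; a minimal sketch with the same flags (assuming the tab-separated #define layout that the traced cut -f2 relies on):

hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
# Pick the value field of each #define and strip the quotes around strings.
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
echo "$major.$minor.$patch$suffix"   # 24.1.1-pre for the build under test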
00:11:30.251 04:51:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.251 04:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.251 04:51:53 -- common/autotest_common.sh@10 -- # set +x 00:11:30.251 ************************************ 00:11:30.251 START TEST blockdev_general 00:11:30.251 ************************************ 00:11:30.251 04:51:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:30.251 * Looking for test storage... 00:11:30.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:30.251 04:51:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.251 04:51:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.251 04:51:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.510 04:51:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.510 04:51:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.510 04:51:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.510 04:51:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.510 04:51:53 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.510 04:51:53 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.510 04:51:53 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.510 04:51:53 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.510 04:51:53 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.510 04:51:53 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.510 04:51:53 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.510 04:51:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.510 04:51:53 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.510 04:51:53 -- scripts/common.sh@344 -- # : 1 00:11:30.510 04:51:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.510 04:51:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.510 04:51:53 -- scripts/common.sh@364 -- # decimal 1 00:11:30.510 04:51:53 -- scripts/common.sh@352 -- # local d=1 00:11:30.510 04:51:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.510 04:51:53 -- scripts/common.sh@354 -- # echo 1 00:11:30.510 04:51:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.510 04:51:53 -- scripts/common.sh@365 -- # decimal 2 00:11:30.510 04:51:53 -- scripts/common.sh@352 -- # local d=2 00:11:30.510 04:51:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.510 04:51:53 -- scripts/common.sh@354 -- # echo 2 00:11:30.510 04:51:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.510 04:51:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.510 04:51:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.510 04:51:53 -- scripts/common.sh@367 -- # return 0 00:11:30.510 04:51:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.510 04:51:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.510 --rc genhtml_branch_coverage=1 00:11:30.510 --rc genhtml_function_coverage=1 00:11:30.510 --rc genhtml_legend=1 00:11:30.510 --rc geninfo_all_blocks=1 00:11:30.510 --rc geninfo_unexecuted_blocks=1 00:11:30.510 00:11:30.510 ' 00:11:30.510 04:51:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.510 --rc genhtml_branch_coverage=1 00:11:30.510 --rc genhtml_function_coverage=1 00:11:30.510 --rc genhtml_legend=1 00:11:30.510 --rc geninfo_all_blocks=1 00:11:30.510 --rc geninfo_unexecuted_blocks=1 00:11:30.510 00:11:30.510 ' 00:11:30.510 04:51:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.510 --rc genhtml_branch_coverage=1 00:11:30.510 --rc genhtml_function_coverage=1 00:11:30.510 --rc genhtml_legend=1 00:11:30.510 --rc geninfo_all_blocks=1 00:11:30.510 --rc geninfo_unexecuted_blocks=1 00:11:30.510 00:11:30.510 ' 00:11:30.510 04:51:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.510 --rc genhtml_branch_coverage=1 00:11:30.510 --rc genhtml_function_coverage=1 00:11:30.510 --rc genhtml_legend=1 00:11:30.510 --rc geninfo_all_blocks=1 00:11:30.510 --rc geninfo_unexecuted_blocks=1 00:11:30.510 00:11:30.510 ' 00:11:30.510 04:51:53 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:30.510 04:51:53 -- bdev/nbd_common.sh@6 -- # set -e 00:11:30.510 04:51:53 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:30.510 04:51:53 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:30.510 04:51:53 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:30.510 04:51:53 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:30.510 04:51:53 -- bdev/blockdev.sh@18 -- # : 00:11:30.510 04:51:53 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:30.510 04:51:53 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:30.510 04:51:53 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:30.510 04:51:53 -- bdev/blockdev.sh@672 -- # uname -s 00:11:30.510 04:51:53 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:30.510 04:51:53 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:30.510 04:51:53 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:30.510 04:51:53 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:30.510 04:51:53 -- bdev/blockdev.sh@682 -- # dek= 00:11:30.510 04:51:53 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:30.510 04:51:53 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:30.510 04:51:53 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:30.511 04:51:53 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:30.511 04:51:53 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:30.511 04:51:53 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:30.511 04:51:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=65598 00:11:30.511 04:51:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:30.511 04:51:53 -- bdev/blockdev.sh@47 -- # waitforlisten 65598 00:11:30.511 04:51:53 -- common/autotest_common.sh@829 -- # '[' -z 65598 ']' 00:11:30.511 04:51:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.511 04:51:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.511 04:51:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.511 04:51:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.511 04:51:53 -- common/autotest_common.sh@10 -- # set +x 00:11:30.511 04:51:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:30.511 [2024-11-18 04:51:53.888825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:30.511 [2024-11-18 04:51:53.888991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65598 ] 00:11:30.769 [2024-11-18 04:51:54.056406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.769 [2024-11-18 04:51:54.239834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:30.769 [2024-11-18 04:51:54.240148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.337 04:51:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.337 04:51:54 -- common/autotest_common.sh@862 -- # return 0 00:11:31.337 04:51:54 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:31.337 04:51:54 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:31.337 04:51:54 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:31.337 04:51:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.337 04:51:54 -- common/autotest_common.sh@10 -- # set +x 00:11:32.275 [2024-11-18 04:51:55.534826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:32.275 [2024-11-18 04:51:55.534924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:32.275 00:11:32.275 [2024-11-18 04:51:55.542776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:32.275 [2024-11-18 04:51:55.542855] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:32.275 00:11:32.275 Malloc0 00:11:32.275 Malloc1 00:11:32.275 Malloc2 00:11:32.275 Malloc3 00:11:32.275 Malloc4 00:11:32.275 
Malloc5 00:11:32.275 Malloc6 00:11:32.534 Malloc7 00:11:32.534 Malloc8 00:11:32.534 Malloc9 00:11:32.534 [2024-11-18 04:51:55.883260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:32.534 [2024-11-18 04:51:55.883348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.534 [2024-11-18 04:51:55.883377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:11:32.534 [2024-11-18 04:51:55.883390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.534 [2024-11-18 04:51:55.885615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.534 [2024-11-18 04:51:55.885672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:32.534 TestPT 00:11:32.534 04:51:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.534 04:51:55 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:32.534 5000+0 records in 00:11:32.534 5000+0 records out 00:11:32.534 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0217224 s, 471 MB/s 00:11:32.534 04:51:55 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:32.534 04:51:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.534 04:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:32.534 AIO0 00:11:32.534 04:51:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.534 04:51:55 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:32.534 04:51:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.534 04:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:32.534 04:51:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.534 04:51:55 -- bdev/blockdev.sh@738 -- # cat 00:11:32.534 04:51:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:32.534 04:51:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.534 04:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:32.534 04:51:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.534 04:51:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:32.534 04:51:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.534 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:11:32.534 04:51:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.534 04:51:56 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:32.534 04:51:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.534 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:11:32.794 04:51:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.794 04:51:56 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:32.794 04:51:56 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:32.794 04:51:56 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:32.794 04:51:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.794 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:11:32.794 04:51:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.794 04:51:56 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:32.794 04:51:56 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:32.795 04:51:56 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c3b0bb8-fcaf-4dc4-bb7d-36ebb24938dc"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c3b0bb8-fcaf-4dc4-bb7d-36ebb24938dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f0594ba0-8bd9-573f-85c0-4780d2a95551"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f0594ba0-8bd9-573f-85c0-4780d2a95551",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f9479ca7-ce3d-5aeb-860b-1de2ecbf4e6c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f9479ca7-ce3d-5aeb-860b-1de2ecbf4e6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9cdde703-3a2b-5611-8f59-da00df5eaf1e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9cdde703-3a2b-5611-8f59-da00df5eaf1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "1ebb1277-c0a1-52d0-930c-93e16a22321f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ebb1277-c0a1-52d0-930c-93e16a22321f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "0634d059-c94f-536c-9176-812de044e15c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0634d059-c94f-536c-9176-812de044e15c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "20f06192-ebd7-54d9-b889-6cda7e4bd821"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20f06192-ebd7-54d9-b889-6cda7e4bd821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cda27556-0891-5cf6-abe5-ae61e8b72d63"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cda27556-0891-5cf6-abe5-ae61e8b72d63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "c64ed296-f326-5c3a-a315-b04d1205f634"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c64ed296-f326-5c3a-a315-b04d1205f634",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "da5e6121-b379-5e17-ab19-481eebf2ce09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "da5e6121-b379-5e17-ab19-481eebf2ce09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c120acbd-5d09-5fba-8494-58ffd454eed2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c120acbd-5d09-5fba-8494-58ffd454eed2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "cbcf9a3c-fd98-56a9-b9b8-7796785bbca1"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cbcf9a3c-fd98-56a9-b9b8-7796785bbca1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2a17c6be-bdcb-4642-bc2f-404ffc28a628"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a17c6be-bdcb-4642-bc2f-404ffc28a628",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a17c6be-bdcb-4642-bc2f-404ffc28a628",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cbd61335-fa97-4443-88e1-bfbd2fd39b26",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f0bd2ef3-4b1b-4afa-9a39-b8a91612f7cd",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7c1bdec2-a7c7-400c-93bf-33103b8abc5c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7c1bdec2-a7c7-400c-93bf-33103b8abc5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7c1bdec2-a7c7-400c-93bf-33103b8abc5c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "4521ff8b-683c-499c-b0a0-3abdcc10e5d5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e7fb89f7-2a71-4946-9242-b1c9592b4cd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "192f6de6-eae3-449e-b39f-544a873016e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "192f6de6-eae3-449e-b39f-544a873016e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "192f6de6-eae3-449e-b39f-544a873016e3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "1d4399d5-b6ee-44a4-9dc1-b6d5072bc3b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8cb732f7-36f0-45d7-b5e7-ad5d92215636",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "18ea5e56-5565-4668-a896-40808c2fa663"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "18ea5e56-5565-4668-a896-40808c2fa663",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:32.796 04:51:56 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:32.796 04:51:56 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:32.796 04:51:56 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:32.796 04:51:56 -- bdev/blockdev.sh@752 -- # killprocess 65598 00:11:32.796 04:51:56 -- common/autotest_common.sh@936 -- # '[' -z 65598 ']' 00:11:32.796 04:51:56 -- common/autotest_common.sh@940 -- # kill -0 65598 00:11:32.796 04:51:56 -- common/autotest_common.sh@941 -- # uname 00:11:32.796 04:51:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:32.796 04:51:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65598 00:11:32.796 04:51:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:32.796 04:51:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:32.796 killing process with pid 65598 00:11:32.796 04:51:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65598' 00:11:32.796 04:51:56 -- common/autotest_common.sh@955 -- # kill 65598 00:11:32.796 04:51:56 -- common/autotest_common.sh@960 -- # wait 65598 00:11:36.083 04:51:59 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:36.083 04:51:59 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:36.083 04:51:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:36.083 04:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.083 04:51:59 -- common/autotest_common.sh@10 -- # set +x 00:11:36.083 ************************************ 00:11:36.083 START TEST bdev_hello_world 00:11:36.083 ************************************ 00:11:36.083 04:51:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:36.083 [2024-11-18 04:51:59.115866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:36.083 [2024-11-18 04:51:59.116595] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65677 ] 00:11:36.083 [2024-11-18 04:51:59.288720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.083 [2024-11-18 04:51:59.475128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.341 [2024-11-18 04:51:59.813966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:36.341 [2024-11-18 04:51:59.814075] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:36.341 [2024-11-18 04:51:59.821927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:36.341 [2024-11-18 04:51:59.822006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:36.341 [2024-11-18 04:51:59.829955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:36.341 [2024-11-18 04:51:59.830015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:36.341 [2024-11-18 04:51:59.830047] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:36.600 [2024-11-18 04:51:59.994903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:36.600 [2024-11-18 04:51:59.995009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.600 [2024-11-18 04:51:59.995035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:36.600 [2024-11-18 04:51:59.995054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.600 [2024-11-18 04:51:59.997487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.600 [2024-11-18 04:51:59.997546] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:36.859 [2024-11-18 04:52:00.259291] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:36.859 [2024-11-18 04:52:00.259385] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:36.859 [2024-11-18 04:52:00.259451] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:36.859 [2024-11-18 04:52:00.259523] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:36.859 [2024-11-18 04:52:00.259607] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:36.859 [2024-11-18 04:52:00.259642] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:36.859 [2024-11-18 04:52:00.259705] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
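The hello world round trip above can be reproduced by hand with the same invocation the trace shows; a sketch of that manual rerun, using the repo paths already visible in the log:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0
  # Expected NOTICE sequence: open the bdev, open an io channel,
  # write the string, then read it back: 'Read string from bdev : Hello World!'

The example drives one full write/read cycle against Malloc0 and then stops the app, which is exactly what the surrounding NOTICE lines record.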
00:11:36.859 00:11:36.859 [2024-11-18 04:52:00.259740] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:38.765 00:11:38.765 real 0m3.045s 00:11:38.765 user 0m2.595s 00:11:38.765 sys 0m0.327s 00:11:38.765 04:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.765 04:52:02 -- common/autotest_common.sh@10 -- # set +x 00:11:38.765 ************************************ 00:11:38.765 END TEST bdev_hello_world 00:11:38.765 ************************************ 00:11:38.765 04:52:02 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:38.765 04:52:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:38.765 04:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.765 04:52:02 -- common/autotest_common.sh@10 -- # set +x 00:11:38.765 ************************************ 00:11:38.765 START TEST bdev_bounds 00:11:38.765 ************************************ 00:11:38.765 04:52:02 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:38.765 04:52:02 -- bdev/blockdev.sh@288 -- # bdevio_pid=65726 00:11:38.765 04:52:02 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:38.765 04:52:02 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:38.765 Process bdevio pid: 65726 00:11:38.765 04:52:02 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 65726' 00:11:38.765 04:52:02 -- bdev/blockdev.sh@291 -- # waitforlisten 65726 00:11:38.765 04:52:02 -- common/autotest_common.sh@829 -- # '[' -z 65726 ']' 00:11:38.765 04:52:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.765 04:52:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.765 04:52:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.765 04:52:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.765 04:52:02 -- common/autotest_common.sh@10 -- # set +x 00:11:38.765 [2024-11-18 04:52:02.219217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
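waitforlisten (common/autotest_common.sh@833-838 above) blocks until the freshly started process accepts RPCs on its UNIX domain socket. One plausible minimal shape of such a helper, assuming rpc.py and the default socket path; the real implementation in autotest_common.sh carries more retries and error handling:

  # Sketch only: poll until $pid answers RPCs on $rpc_addr, or give up.
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # process died before it ever listened
      if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
        return 0
      fi
      sleep 0.5
    done
    return 1
  }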
00:11:38.765 [2024-11-18 04:52:02.219411] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65726 ] 00:11:39.025 [2024-11-18 04:52:02.391683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.284 [2024-11-18 04:52:02.571949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.284 [2024-11-18 04:52:02.572104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.284 [2024-11-18 04:52:02.572126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.543 [2024-11-18 04:52:02.903756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:39.544 [2024-11-18 04:52:02.903830] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:39.544 [2024-11-18 04:52:02.911722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:39.544 [2024-11-18 04:52:02.911767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:39.544 [2024-11-18 04:52:02.919754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:39.544 [2024-11-18 04:52:02.919793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:39.544 [2024-11-18 04:52:02.919808] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:39.860 [2024-11-18 04:52:03.094995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:39.860 [2024-11-18 04:52:03.095069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.860 [2024-11-18 04:52:03.095102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:39.860 [2024-11-18 04:52:03.095116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.860 [2024-11-18 04:52:03.097790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.860 [2024-11-18 04:52:03.097834] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:40.444 04:52:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.444 04:52:03 -- common/autotest_common.sh@862 -- # return 0 00:11:40.444 04:52:03 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:40.704 I/O targets: 00:11:40.704 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:40.704 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:40.704 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:40.704 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:40.704 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:40.704 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:40.704 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:40.704 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:40.704 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:11:40.704 00:11:40.704 00:11:40.704 CUnit - A unit testing framework for C - Version 2.1-3 00:11:40.704 http://cunit.sourceforge.net/ 00:11:40.704 00:11:40.704 00:11:40.704 Suite: bdevio tests on: AIO0 00:11:40.704 Test: blockdev write read block ...passed 00:11:40.704 Test: blockdev write zeroes read block ...passed 00:11:40.704 Test: blockdev write zeroes read no split ...passed 00:11:40.704 Test: blockdev write zeroes read split ...passed 00:11:40.704 Test: blockdev write zeroes read split partial ...passed 00:11:40.704 Test: blockdev reset ...passed 00:11:40.704 Test: blockdev write read 8 blocks ...passed 00:11:40.704 Test: blockdev write read size > 128k ...passed 00:11:40.704 Test: blockdev write read invalid size ...passed 00:11:40.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.704 Test: blockdev write read max offset ...passed 00:11:40.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.704 Test: blockdev writev readv 8 blocks ...passed 00:11:40.704 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.704 Test: blockdev writev readv block ...passed 00:11:40.704 Test: blockdev writev readv size > 128k ...passed 00:11:40.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.704 Test: blockdev comparev and writev ...passed 00:11:40.704 Test: blockdev nvme passthru rw ...passed 00:11:40.704 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.704 Test: blockdev nvme admin passthru ...passed 00:11:40.704 Test: blockdev copy ...passed 00:11:40.704 Suite: bdevio tests on: raid1 00:11:40.704 Test: blockdev write read block ...passed 00:11:40.704 Test: blockdev write zeroes read block ...passed 00:11:40.704 Test: blockdev write zeroes read no split ...passed 00:11:40.704 Test: blockdev write zeroes read split ...passed 00:11:40.704 Test: blockdev write zeroes read split partial ...passed 00:11:40.704 Test: blockdev reset ...passed 00:11:40.704 Test: blockdev write read 8 blocks ...passed 00:11:40.704 Test: blockdev write read size > 128k ...passed 00:11:40.704 Test: blockdev write read invalid size ...passed 00:11:40.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.704 Test: blockdev write read max offset ...passed 00:11:40.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.704 Test: blockdev writev readv 8 blocks ...passed 00:11:40.704 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.704 Test: blockdev writev readv block ...passed 00:11:40.704 Test: blockdev writev readv size > 128k ...passed 00:11:40.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.704 Test: blockdev comparev and writev ...passed 00:11:40.704 Test: blockdev nvme passthru rw ...passed 00:11:40.704 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.704 Test: blockdev nvme admin passthru ...passed 00:11:40.704 Test: blockdev copy ...passed 00:11:40.704 Suite: bdevio tests on: concat0 00:11:40.704 Test: blockdev write read block ...passed 00:11:40.704 Test: blockdev write zeroes read block ...passed 00:11:40.704 Test: blockdev write zeroes read no split ...passed 00:11:40.704 Test: blockdev write zeroes read split ...passed 00:11:40.704 Test: blockdev write zeroes read split partial ...passed 00:11:40.704 Test: blockdev reset 
...passed 00:11:40.704 Test: blockdev write read 8 blocks ...passed 00:11:40.704 Test: blockdev write read size > 128k ...passed 00:11:40.704 Test: blockdev write read invalid size ...passed 00:11:40.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.704 Test: blockdev write read max offset ...passed 00:11:40.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.704 Test: blockdev writev readv 8 blocks ...passed 00:11:40.704 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.704 Test: blockdev writev readv block ...passed 00:11:40.704 Test: blockdev writev readv size > 128k ...passed 00:11:40.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.704 Test: blockdev comparev and writev ...passed 00:11:40.704 Test: blockdev nvme passthru rw ...passed 00:11:40.704 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.704 Test: blockdev nvme admin passthru ...passed 00:11:40.704 Test: blockdev copy ...passed 00:11:40.704 Suite: bdevio tests on: raid0 00:11:40.705 Test: blockdev write read block ...passed 00:11:40.705 Test: blockdev write zeroes read block ...passed 00:11:40.705 Test: blockdev write zeroes read no split ...passed 00:11:40.964 Test: blockdev write zeroes read split ...passed 00:11:40.964 Test: blockdev write zeroes read split partial ...passed 00:11:40.964 Test: blockdev reset ...passed 00:11:40.964 Test: blockdev write read 8 blocks ...passed 00:11:40.964 Test: blockdev write read size > 128k ...passed 00:11:40.964 Test: blockdev write read invalid size ...passed 00:11:40.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.964 Test: blockdev write read max offset ...passed 00:11:40.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.964 Test: blockdev writev readv 8 blocks ...passed 00:11:40.964 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.964 Test: blockdev writev readv block ...passed 00:11:40.964 Test: blockdev writev readv size > 128k ...passed 00:11:40.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.964 Test: blockdev comparev and writev ...passed 00:11:40.964 Test: blockdev nvme passthru rw ...passed 00:11:40.964 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.964 Test: blockdev nvme admin passthru ...passed 00:11:40.964 Test: blockdev copy ...passed 00:11:40.964 Suite: bdevio tests on: TestPT 00:11:40.964 Test: blockdev write read block ...passed 00:11:40.964 Test: blockdev write zeroes read block ...passed 00:11:40.964 Test: blockdev write zeroes read no split ...passed 00:11:40.964 Test: blockdev write zeroes read split ...passed 00:11:40.964 Test: blockdev write zeroes read split partial ...passed 00:11:40.964 Test: blockdev reset ...passed 00:11:40.964 Test: blockdev write read 8 blocks ...passed 00:11:40.964 Test: blockdev write read size > 128k ...passed 00:11:40.964 Test: blockdev write read invalid size ...passed 00:11:40.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.964 Test: blockdev write read max offset ...passed 00:11:40.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.964 Test: blockdev writev readv 8 blocks 
...passed 00:11:40.964 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.964 Test: blockdev writev readv block ...passed 00:11:40.964 Test: blockdev writev readv size > 128k ...passed 00:11:40.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.964 Test: blockdev comparev and writev ...passed 00:11:40.964 Test: blockdev nvme passthru rw ...passed 00:11:40.964 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.964 Test: blockdev nvme admin passthru ...passed 00:11:40.964 Test: blockdev copy ...passed 00:11:40.964 Suite: bdevio tests on: Malloc2p7 00:11:40.964 Test: blockdev write read block ...passed 00:11:40.964 Test: blockdev write zeroes read block ...passed 00:11:40.964 Test: blockdev write zeroes read no split ...passed 00:11:40.964 Test: blockdev write zeroes read split ...passed 00:11:40.964 Test: blockdev write zeroes read split partial ...passed 00:11:40.964 Test: blockdev reset ...passed 00:11:40.964 Test: blockdev write read 8 blocks ...passed 00:11:40.964 Test: blockdev write read size > 128k ...passed 00:11:40.964 Test: blockdev write read invalid size ...passed 00:11:40.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.964 Test: blockdev write read max offset ...passed 00:11:40.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.964 Test: blockdev writev readv 8 blocks ...passed 00:11:40.964 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.964 Test: blockdev writev readv block ...passed 00:11:40.964 Test: blockdev writev readv size > 128k ...passed 00:11:40.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.964 Test: blockdev comparev and writev ...passed 00:11:40.964 Test: blockdev nvme passthru rw ...passed 00:11:40.964 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.964 Test: blockdev nvme admin passthru ...passed 00:11:40.964 Test: blockdev copy ...passed 00:11:40.964 Suite: bdevio tests on: Malloc2p6 00:11:40.964 Test: blockdev write read block ...passed 00:11:40.964 Test: blockdev write zeroes read block ...passed 00:11:40.964 Test: blockdev write zeroes read no split ...passed 00:11:40.964 Test: blockdev write zeroes read split ...passed 00:11:40.964 Test: blockdev write zeroes read split partial ...passed 00:11:40.964 Test: blockdev reset ...passed 00:11:40.964 Test: blockdev write read 8 blocks ...passed 00:11:40.964 Test: blockdev write read size > 128k ...passed 00:11:40.964 Test: blockdev write read invalid size ...passed 00:11:40.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.964 Test: blockdev write read max offset ...passed 00:11:40.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.964 Test: blockdev writev readv 8 blocks ...passed 00:11:40.964 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.964 Test: blockdev writev readv block ...passed 00:11:40.964 Test: blockdev writev readv size > 128k ...passed 00:11:40.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.964 Test: blockdev comparev and writev ...passed 00:11:40.964 Test: blockdev nvme passthru rw ...passed 00:11:40.964 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.964 Test: blockdev nvme admin passthru ...passed 00:11:40.964 Test: blockdev copy ...passed 
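The Malloc2pN suites all target equal slices of the same base bdev: per the JSON dump above, Malloc2p0 through Malloc2p7 are consecutive 8192-block split disks over Malloc2, with offset_blocks stepping by 8192. A one-line check of those offsets, plus the RPC that typically builds such a split (a sketch, assuming a running target):

  for n in 0 1 2 3 4 5 6 7; do echo "Malloc2p$n offset_blocks=$(( n * 8192 ))"; done
  # scripts/rpc.py bdev_split_create Malloc2 8   # 8 equal parts of the 65536-block base

This is why each of the eight suites reports identical geometry: 8192 blocks of 512 bytes (4 MiB).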
00:11:40.964 Suite: bdevio tests on: Malloc2p5 00:11:40.964 Test: blockdev write read block ...passed 00:11:40.964 Test: blockdev write zeroes read block ...passed 00:11:40.964 Test: blockdev write zeroes read no split ...passed 00:11:41.224 Test: blockdev write zeroes read split ...passed 00:11:41.224 Test: blockdev write zeroes read split partial ...passed 00:11:41.224 Test: blockdev reset ...passed 00:11:41.224 Test: blockdev write read 8 blocks ...passed 00:11:41.224 Test: blockdev write read size > 128k ...passed 00:11:41.224 Test: blockdev write read invalid size ...passed 00:11:41.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.224 Test: blockdev write read max offset ...passed 00:11:41.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.224 Test: blockdev writev readv 8 blocks ...passed 00:11:41.224 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.224 Test: blockdev writev readv block ...passed 00:11:41.224 Test: blockdev writev readv size > 128k ...passed 00:11:41.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.224 Test: blockdev comparev and writev ...passed 00:11:41.224 Test: blockdev nvme passthru rw ...passed 00:11:41.224 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.224 Test: blockdev nvme admin passthru ...passed 00:11:41.224 Test: blockdev copy ...passed 00:11:41.224 Suite: bdevio tests on: Malloc2p4 00:11:41.224 Test: blockdev write read block ...passed 00:11:41.224 Test: blockdev write zeroes read block ...passed 00:11:41.224 Test: blockdev write zeroes read no split ...passed 00:11:41.224 Test: blockdev write zeroes read split ...passed 00:11:41.224 Test: blockdev write zeroes read split partial ...passed 00:11:41.224 Test: blockdev reset ...passed 00:11:41.224 Test: blockdev write read 8 blocks ...passed 00:11:41.224 Test: blockdev write read size > 128k ...passed 00:11:41.224 Test: blockdev write read invalid size ...passed 00:11:41.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.224 Test: blockdev write read max offset ...passed 00:11:41.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.224 Test: blockdev writev readv 8 blocks ...passed 00:11:41.224 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.224 Test: blockdev writev readv block ...passed 00:11:41.224 Test: blockdev writev readv size > 128k ...passed 00:11:41.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.224 Test: blockdev comparev and writev ...passed 00:11:41.224 Test: blockdev nvme passthru rw ...passed 00:11:41.224 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.224 Test: blockdev nvme admin passthru ...passed 00:11:41.224 Test: blockdev copy ...passed 00:11:41.224 Suite: bdevio tests on: Malloc2p3 00:11:41.224 Test: blockdev write read block ...passed 00:11:41.224 Test: blockdev write zeroes read block ...passed 00:11:41.224 Test: blockdev write zeroes read no split ...passed 00:11:41.224 Test: blockdev write zeroes read split ...passed 00:11:41.224 Test: blockdev write zeroes read split partial ...passed 00:11:41.224 Test: blockdev reset ...passed 00:11:41.224 Test: blockdev write read 8 blocks ...passed 00:11:41.224 Test: blockdev write read size > 128k ...passed 00:11:41.224 Test: 
blockdev write read invalid size ...passed 00:11:41.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.224 Test: blockdev write read max offset ...passed 00:11:41.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.224 Test: blockdev writev readv 8 blocks ...passed 00:11:41.224 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.224 Test: blockdev writev readv block ...passed 00:11:41.224 Test: blockdev writev readv size > 128k ...passed 00:11:41.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.224 Test: blockdev comparev and writev ...passed 00:11:41.224 Test: blockdev nvme passthru rw ...passed 00:11:41.224 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.224 Test: blockdev nvme admin passthru ...passed 00:11:41.224 Test: blockdev copy ...passed 00:11:41.224 Suite: bdevio tests on: Malloc2p2 00:11:41.224 Test: blockdev write read block ...passed 00:11:41.224 Test: blockdev write zeroes read block ...passed 00:11:41.224 Test: blockdev write zeroes read no split ...passed 00:11:41.224 Test: blockdev write zeroes read split ...passed 00:11:41.224 Test: blockdev write zeroes read split partial ...passed 00:11:41.224 Test: blockdev reset ...passed 00:11:41.224 Test: blockdev write read 8 blocks ...passed 00:11:41.224 Test: blockdev write read size > 128k ...passed 00:11:41.224 Test: blockdev write read invalid size ...passed 00:11:41.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.224 Test: blockdev write read max offset ...passed 00:11:41.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.224 Test: blockdev writev readv 8 blocks ...passed 00:11:41.224 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.224 Test: blockdev writev readv block ...passed 00:11:41.224 Test: blockdev writev readv size > 128k ...passed 00:11:41.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.224 Test: blockdev comparev and writev ...passed 00:11:41.224 Test: blockdev nvme passthru rw ...passed 00:11:41.224 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.224 Test: blockdev nvme admin passthru ...passed 00:11:41.224 Test: blockdev copy ...passed 00:11:41.224 Suite: bdevio tests on: Malloc2p1 00:11:41.224 Test: blockdev write read block ...passed 00:11:41.224 Test: blockdev write zeroes read block ...passed 00:11:41.224 Test: blockdev write zeroes read no split ...passed 00:11:41.224 Test: blockdev write zeroes read split ...passed 00:11:41.484 Test: blockdev write zeroes read split partial ...passed 00:11:41.484 Test: blockdev reset ...passed 00:11:41.484 Test: blockdev write read 8 blocks ...passed 00:11:41.484 Test: blockdev write read size > 128k ...passed 00:11:41.484 Test: blockdev write read invalid size ...passed 00:11:41.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.484 Test: blockdev write read max offset ...passed 00:11:41.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.484 Test: blockdev writev readv 8 blocks ...passed 00:11:41.484 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.484 Test: blockdev writev readv block ...passed 
00:11:41.484 Test: blockdev writev readv size > 128k ...passed 00:11:41.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.484 Test: blockdev comparev and writev ...passed 00:11:41.484 Test: blockdev nvme passthru rw ...passed 00:11:41.484 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.484 Test: blockdev nvme admin passthru ...passed 00:11:41.484 Test: blockdev copy ...passed 00:11:41.484 Suite: bdevio tests on: Malloc2p0 00:11:41.484 Test: blockdev write read block ...passed 00:11:41.484 Test: blockdev write zeroes read block ...passed 00:11:41.484 Test: blockdev write zeroes read no split ...passed 00:11:41.484 Test: blockdev write zeroes read split ...passed 00:11:41.484 Test: blockdev write zeroes read split partial ...passed 00:11:41.484 Test: blockdev reset ...passed 00:11:41.484 Test: blockdev write read 8 blocks ...passed 00:11:41.484 Test: blockdev write read size > 128k ...passed 00:11:41.484 Test: blockdev write read invalid size ...passed 00:11:41.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.484 Test: blockdev write read max offset ...passed 00:11:41.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.484 Test: blockdev writev readv 8 blocks ...passed 00:11:41.484 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.484 Test: blockdev writev readv block ...passed 00:11:41.484 Test: blockdev writev readv size > 128k ...passed 00:11:41.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.484 Test: blockdev comparev and writev ...passed 00:11:41.484 Test: blockdev nvme passthru rw ...passed 00:11:41.484 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.484 Test: blockdev nvme admin passthru ...passed 00:11:41.484 Test: blockdev copy ...passed 00:11:41.484 Suite: bdevio tests on: Malloc1p1 00:11:41.484 Test: blockdev write read block ...passed 00:11:41.484 Test: blockdev write zeroes read block ...passed 00:11:41.484 Test: blockdev write zeroes read no split ...passed 00:11:41.484 Test: blockdev write zeroes read split ...passed 00:11:41.484 Test: blockdev write zeroes read split partial ...passed 00:11:41.484 Test: blockdev reset ...passed 00:11:41.484 Test: blockdev write read 8 blocks ...passed 00:11:41.484 Test: blockdev write read size > 128k ...passed 00:11:41.484 Test: blockdev write read invalid size ...passed 00:11:41.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.484 Test: blockdev write read max offset ...passed 00:11:41.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.484 Test: blockdev writev readv 8 blocks ...passed 00:11:41.484 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.484 Test: blockdev writev readv block ...passed 00:11:41.484 Test: blockdev writev readv size > 128k ...passed 00:11:41.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.484 Test: blockdev comparev and writev ...passed 00:11:41.484 Test: blockdev nvme passthru rw ...passed 00:11:41.485 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.485 Test: blockdev nvme admin passthru ...passed 00:11:41.485 Test: blockdev copy ...passed 00:11:41.485 Suite: bdevio tests on: Malloc1p0 00:11:41.485 Test: blockdev write read block ...passed 00:11:41.485 Test: blockdev 
write zeroes read block ...passed 00:11:41.485 Test: blockdev write zeroes read no split ...passed 00:11:41.485 Test: blockdev write zeroes read split ...passed 00:11:41.485 Test: blockdev write zeroes read split partial ...passed 00:11:41.485 Test: blockdev reset ...passed 00:11:41.485 Test: blockdev write read 8 blocks ...passed 00:11:41.485 Test: blockdev write read size > 128k ...passed 00:11:41.485 Test: blockdev write read invalid size ...passed 00:11:41.485 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.485 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.485 Test: blockdev write read max offset ...passed 00:11:41.485 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.485 Test: blockdev writev readv 8 blocks ...passed 00:11:41.485 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.485 Test: blockdev writev readv block ...passed 00:11:41.485 Test: blockdev writev readv size > 128k ...passed 00:11:41.485 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.485 Test: blockdev comparev and writev ...passed 00:11:41.485 Test: blockdev nvme passthru rw ...passed 00:11:41.485 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.485 Test: blockdev nvme admin passthru ...passed 00:11:41.485 Test: blockdev copy ...passed 00:11:41.485 Suite: bdevio tests on: Malloc0 00:11:41.485 Test: blockdev write read block ...passed 00:11:41.485 Test: blockdev write zeroes read block ...passed 00:11:41.485 Test: blockdev write zeroes read no split ...passed 00:11:41.485 Test: blockdev write zeroes read split ...passed 00:11:41.485 Test: blockdev write zeroes read split partial ...passed 00:11:41.485 Test: blockdev reset ...passed 00:11:41.485 Test: blockdev write read 8 blocks ...passed 00:11:41.485 Test: blockdev write read size > 128k ...passed 00:11:41.485 Test: blockdev write read invalid size ...passed 00:11:41.485 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.485 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.485 Test: blockdev write read max offset ...passed 00:11:41.485 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.485 Test: blockdev writev readv 8 blocks ...passed 00:11:41.485 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.485 Test: blockdev writev readv block ...passed 00:11:41.485 Test: blockdev writev readv size > 128k ...passed 00:11:41.485 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.485 Test: blockdev comparev and writev ...passed 00:11:41.485 Test: blockdev nvme passthru rw ...passed 00:11:41.485 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.485 Test: blockdev nvme admin passthru ...passed 00:11:41.485 Test: blockdev copy ...passed 00:11:41.485 00:11:41.485 Run Summary: Type Total Ran Passed Failed Inactive 00:11:41.485 suites 16 16 n/a 0 0 00:11:41.485 tests 368 368 368 0 0 00:11:41.485 asserts 2224 2224 2224 0 n/a 00:11:41.485 00:11:41.485 Elapsed time = 2.892 seconds 00:11:41.485 0 00:11:41.745 04:52:05 -- bdev/blockdev.sh@293 -- # killprocess 65726 00:11:41.745 04:52:05 -- common/autotest_common.sh@936 -- # '[' -z 65726 ']' 00:11:41.746 04:52:05 -- common/autotest_common.sh@940 -- # kill -0 65726 00:11:41.746 04:52:05 -- common/autotest_common.sh@941 -- # uname 00:11:41.746 04:52:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.746 04:52:05 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 65726 00:11:41.746 04:52:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:41.746 04:52:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:41.746 killing process with pid 65726 00:11:41.746 04:52:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65726' 00:11:41.746 04:52:05 -- common/autotest_common.sh@955 -- # kill 65726 00:11:41.746 04:52:05 -- common/autotest_common.sh@960 -- # wait 65726 00:11:43.652 04:52:06 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:43.652 00:11:43.652 real 0m4.600s 00:11:43.652 user 0m12.130s 00:11:43.652 sys 0m0.571s 00:11:43.652 04:52:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:43.652 04:52:06 -- common/autotest_common.sh@10 -- # set +x 00:11:43.652 ************************************ 00:11:43.652 END TEST bdev_bounds 00:11:43.652 ************************************ 00:11:43.652 04:52:06 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:43.652 04:52:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:43.652 04:52:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.652 04:52:06 -- common/autotest_common.sh@10 -- # set +x 00:11:43.652 ************************************ 00:11:43.652 START TEST bdev_nbd 00:11:43.652 ************************************ 00:11:43.652 04:52:06 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:43.652 04:52:06 -- bdev/blockdev.sh@298 -- # uname -s 00:11:43.652 04:52:06 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:43.652 04:52:06 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:43.652 04:52:06 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:43.652 04:52:06 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:43.652 04:52:06 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:43.652 04:52:06 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:43.652 04:52:06 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:43.652 04:52:06 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:43.652 04:52:06 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:43.652 04:52:06 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:43.652 04:52:06 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:43.652 04:52:06 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:43.652 04:52:06 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 
'raid1' 'AIO0') 00:11:43.652 04:52:06 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:43.652 04:52:06 -- bdev/blockdev.sh@316 -- # nbd_pid=65815 00:11:43.653 04:52:06 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:43.653 04:52:06 -- bdev/blockdev.sh@318 -- # waitforlisten 65815 /var/tmp/spdk-nbd.sock 00:11:43.653 04:52:06 -- common/autotest_common.sh@829 -- # '[' -z 65815 ']' 00:11:43.653 04:52:06 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:43.653 04:52:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:43.653 04:52:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:43.653 04:52:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:43.653 04:52:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.653 04:52:06 -- common/autotest_common.sh@10 -- # set +x 00:11:43.653 [2024-11-18 04:52:06.879274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:43.653 [2024-11-18 04:52:06.879418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.653 [2024-11-18 04:52:07.049789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.912 [2024-11-18 04:52:07.222210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.171 [2024-11-18 04:52:07.559786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.171 [2024-11-18 04:52:07.559864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.171 [2024-11-18 04:52:07.567733] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.171 [2024-11-18 04:52:07.567796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.171 [2024-11-18 04:52:07.575748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.171 [2024-11-18 04:52:07.575802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:44.171 [2024-11-18 04:52:07.575833] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:44.430 [2024-11-18 04:52:07.748047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.430 [2024-11-18 04:52:07.748127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.430 [2024-11-18 04:52:07.748152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:44.430 [2024-11-18 04:52:07.748165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.430 [2024-11-18 04:52:07.750711] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.430 [2024-11-18 04:52:07.750765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:45.368 04:52:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.368 04:52:08 -- 
common/autotest_common.sh@862 -- # return 0 00:11:45.368 04:52:08 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@24 -- # local i 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.368 04:52:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:45.369 04:52:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:45.369 04:52:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:45.369 04:52:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:45.369 04:52:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:45.369 04:52:08 -- common/autotest_common.sh@867 -- # local i 00:11:45.369 04:52:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:45.369 04:52:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:45.369 04:52:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:45.369 04:52:08 -- common/autotest_common.sh@871 -- # break 00:11:45.369 04:52:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:45.369 04:52:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:45.369 04:52:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.369 1+0 records in 00:11:45.369 1+0 records out 00:11:45.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306479 s, 13.4 MB/s 00:11:45.369 04:52:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.369 04:52:08 -- common/autotest_common.sh@884 -- # size=4096 00:11:45.369 04:52:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.369 04:52:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:45.369 04:52:08 -- common/autotest_common.sh@887 -- # return 0 00:11:45.369 04:52:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:45.369 04:52:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.369 04:52:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc1p0 00:11:45.628 04:52:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:45.628 04:52:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:45.628 04:52:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:45.628 04:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:45.628 04:52:09 -- common/autotest_common.sh@867 -- # local i 00:11:45.628 04:52:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:45.628 04:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:45.628 04:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:45.628 04:52:09 -- common/autotest_common.sh@871 -- # break 00:11:45.628 04:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:45.628 04:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:45.628 04:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.628 1+0 records in 00:11:45.628 1+0 records out 00:11:45.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194415 s, 21.1 MB/s 00:11:45.628 04:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.628 04:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:45.628 04:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.628 04:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:45.628 04:52:09 -- common/autotest_common.sh@887 -- # return 0 00:11:45.628 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:45.628 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.628 04:52:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:45.886 04:52:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:45.886 04:52:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:45.886 04:52:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:45.886 04:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:45.886 04:52:09 -- common/autotest_common.sh@867 -- # local i 00:11:45.886 04:52:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:45.886 04:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:45.886 04:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:45.886 04:52:09 -- common/autotest_common.sh@871 -- # break 00:11:45.886 04:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:45.886 04:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:45.886 04:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.886 1+0 records in 00:11:45.886 1+0 records out 00:11:45.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291116 s, 14.1 MB/s 00:11:45.886 04:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.886 04:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:45.886 04:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.886 04:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:45.886 04:52:09 -- common/autotest_common.sh@887 -- # return 0 00:11:45.886 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:45.886 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.886 04:52:09 -- bdev/nbd_common.sh@28 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:46.145 04:52:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:46.145 04:52:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:46.145 04:52:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:46.145 04:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:46.145 04:52:09 -- common/autotest_common.sh@867 -- # local i 00:11:46.145 04:52:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.145 04:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.145 04:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:46.145 04:52:09 -- common/autotest_common.sh@871 -- # break 00:11:46.145 04:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.145 04:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.145 04:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.145 1+0 records in 00:11:46.145 1+0 records out 00:11:46.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345191 s, 11.9 MB/s 00:11:46.145 04:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.145 04:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.145 04:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.145 04:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.145 04:52:09 -- common/autotest_common.sh@887 -- # return 0 00:11:46.145 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.145 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.145 04:52:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:46.404 04:52:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:46.404 04:52:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:46.404 04:52:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:46.404 04:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:46.404 04:52:09 -- common/autotest_common.sh@867 -- # local i 00:11:46.404 04:52:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.404 04:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.404 04:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:46.404 04:52:09 -- common/autotest_common.sh@871 -- # break 00:11:46.404 04:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.404 04:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.404 04:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.404 1+0 records in 00:11:46.404 1+0 records out 00:11:46.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318799 s, 12.8 MB/s 00:11:46.404 04:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.404 04:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.404 04:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.404 04:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.404 04:52:09 -- common/autotest_common.sh@887 -- # return 0 00:11:46.404 04:52:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.404 04:52:09 
-- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.404 04:52:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:46.663 04:52:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:46.663 04:52:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:46.663 04:52:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:46.663 04:52:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:46.663 04:52:10 -- common/autotest_common.sh@867 -- # local i 00:11:46.663 04:52:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.663 04:52:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.663 04:52:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:46.663 04:52:10 -- common/autotest_common.sh@871 -- # break 00:11:46.663 04:52:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.663 04:52:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.663 04:52:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.663 1+0 records in 00:11:46.663 1+0 records out 00:11:46.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401754 s, 10.2 MB/s 00:11:46.663 04:52:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.663 04:52:10 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.663 04:52:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.663 04:52:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.663 04:52:10 -- common/autotest_common.sh@887 -- # return 0 00:11:46.663 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.663 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.663 04:52:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:46.922 04:52:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:46.922 04:52:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:46.922 04:52:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:46.922 04:52:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:46.922 04:52:10 -- common/autotest_common.sh@867 -- # local i 00:11:46.922 04:52:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.922 04:52:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.922 04:52:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:46.922 04:52:10 -- common/autotest_common.sh@871 -- # break 00:11:46.922 04:52:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.922 04:52:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.922 04:52:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.922 1+0 records in 00:11:46.922 1+0 records out 00:11:46.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341564 s, 12.0 MB/s 00:11:46.922 04:52:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.922 04:52:10 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.922 04:52:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.922 04:52:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.922 04:52:10 -- common/autotest_common.sh@887 -- # 
return 0 00:11:46.922 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.922 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.922 04:52:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:47.181 04:52:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:47.181 04:52:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:47.181 04:52:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:47.181 04:52:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:47.181 04:52:10 -- common/autotest_common.sh@867 -- # local i 00:11:47.181 04:52:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.181 04:52:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.181 04:52:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:47.181 04:52:10 -- common/autotest_common.sh@871 -- # break 00:11:47.181 04:52:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.181 04:52:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.181 04:52:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.181 1+0 records in 00:11:47.181 1+0 records out 00:11:47.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518934 s, 7.9 MB/s 00:11:47.181 04:52:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.181 04:52:10 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.181 04:52:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.181 04:52:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.181 04:52:10 -- common/autotest_common.sh@887 -- # return 0 00:11:47.181 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.181 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.181 04:52:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:47.440 04:52:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:47.440 04:52:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:47.699 04:52:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:47.699 04:52:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:47.699 04:52:10 -- common/autotest_common.sh@867 -- # local i 00:11:47.699 04:52:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.699 04:52:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.699 04:52:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:47.699 04:52:10 -- common/autotest_common.sh@871 -- # break 00:11:47.699 04:52:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.699 04:52:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.699 04:52:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.699 1+0 records in 00:11:47.699 1+0 records out 00:11:47.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455908 s, 9.0 MB/s 00:11:47.699 04:52:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.699 04:52:10 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.699 04:52:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.699 04:52:10 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.699 04:52:10 -- common/autotest_common.sh@887 -- # return 0 00:11:47.699 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.699 04:52:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.699 04:52:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:47.700 04:52:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:47.700 04:52:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:47.700 04:52:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:47.700 04:52:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:47.700 04:52:11 -- common/autotest_common.sh@867 -- # local i 00:11:47.700 04:52:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.700 04:52:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.700 04:52:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:47.700 04:52:11 -- common/autotest_common.sh@871 -- # break 00:11:47.700 04:52:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.700 04:52:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.700 04:52:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.700 1+0 records in 00:11:47.700 1+0 records out 00:11:47.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641317 s, 6.4 MB/s 00:11:47.700 04:52:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.700 04:52:11 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.700 04:52:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.700 04:52:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.700 04:52:11 -- common/autotest_common.sh@887 -- # return 0 00:11:47.700 04:52:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.700 04:52:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.700 04:52:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:47.959 04:52:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:47.959 04:52:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:47.959 04:52:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:47.959 04:52:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:47.959 04:52:11 -- common/autotest_common.sh@867 -- # local i 00:11:47.959 04:52:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.959 04:52:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.959 04:52:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:47.959 04:52:11 -- common/autotest_common.sh@871 -- # break 00:11:47.959 04:52:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.959 04:52:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.959 04:52:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.959 1+0 records in 00:11:47.959 1+0 records out 00:11:47.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867985 s, 4.7 MB/s 00:11:47.959 04:52:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.959 04:52:11 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.959 04:52:11 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.959 04:52:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.959 04:52:11 -- common/autotest_common.sh@887 -- # return 0 00:11:47.959 04:52:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.959 04:52:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.959 04:52:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:48.219 04:52:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:48.219 04:52:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:48.219 04:52:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:48.219 04:52:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:48.219 04:52:11 -- common/autotest_common.sh@867 -- # local i 00:11:48.219 04:52:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.219 04:52:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.219 04:52:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:48.219 04:52:11 -- common/autotest_common.sh@871 -- # break 00:11:48.219 04:52:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.219 04:52:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.219 04:52:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.219 1+0 records in 00:11:48.219 1+0 records out 00:11:48.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604402 s, 6.8 MB/s 00:11:48.478 04:52:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.478 04:52:11 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.478 04:52:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.478 04:52:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.478 04:52:11 -- common/autotest_common.sh@887 -- # return 0 00:11:48.478 04:52:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.478 04:52:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.478 04:52:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:48.737 04:52:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:48.737 04:52:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:48.737 04:52:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:48.737 04:52:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:48.737 04:52:12 -- common/autotest_common.sh@867 -- # local i 00:11:48.737 04:52:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.737 04:52:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.737 04:52:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:48.737 04:52:12 -- common/autotest_common.sh@871 -- # break 00:11:48.737 04:52:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.737 04:52:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.737 04:52:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.737 1+0 records in 00:11:48.737 1+0 records out 00:11:48.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547201 s, 7.5 MB/s 00:11:48.737 04:52:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
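The trace above repeats one wait-and-verify step per exported bdev: after each nbd_start_disk RPC, waitfornbd polls /proc/partitions until the nbd node shows up, then proves the device is actually readable with a single 4 KiB O_DIRECT read and checks that a non-empty file came back. A minimal standalone sketch of that step, reconstructed from the xtrace (the 20-iteration bound and the grep/dd/stat calls are visible above; the sleep between polls is an assumption, since the trace does not show one):

waitfornbd() {
    local nbd_name=$1 i size tmpfile
    tmpfile=$(mktemp)                    # scratch file; the harness writes to test/bdev/nbdtest
    for ((i = 1; i <= 20; i++)); do      # bounded poll, as in the trace
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                        # assumed back-off, not visible in xtrace
    done
    # One direct-I/O 4 KiB read; this fails outright if the node never appeared.
    dd if="/dev/$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmpfile")
    rm -f "$tmpfile"
    [ "$size" != 0 ]                     # the read must have produced data
}

Called as, for example, waitfornbd nbd12 immediately after the matching nbd_start_disk, as each block of the trace shows.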
00:11:48.737 04:52:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.737 04:52:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.737 04:52:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.737 04:52:12 -- common/autotest_common.sh@887 -- # return 0 00:11:48.737 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.737 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.737 04:52:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:48.996 04:52:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:48.996 04:52:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:48.996 04:52:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:48.996 04:52:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:48.996 04:52:12 -- common/autotest_common.sh@867 -- # local i 00:11:48.996 04:52:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.996 04:52:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.996 04:52:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:48.996 04:52:12 -- common/autotest_common.sh@871 -- # break 00:11:48.996 04:52:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.996 04:52:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.996 04:52:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.996 1+0 records in 00:11:48.996 1+0 records out 00:11:48.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748129 s, 5.5 MB/s 00:11:48.996 04:52:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.996 04:52:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.996 04:52:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.996 04:52:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.996 04:52:12 -- common/autotest_common.sh@887 -- # return 0 00:11:48.996 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.996 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.996 04:52:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:49.255 04:52:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:49.255 04:52:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:49.255 04:52:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:49.255 04:52:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:49.255 04:52:12 -- common/autotest_common.sh@867 -- # local i 00:11:49.255 04:52:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.255 04:52:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.255 04:52:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:49.255 04:52:12 -- common/autotest_common.sh@871 -- # break 00:11:49.255 04:52:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.255 04:52:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.255 04:52:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.255 1+0 records in 00:11:49.255 1+0 records out 00:11:49.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655437 s, 6.2 MB/s 00:11:49.255 04:52:12 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.255 04:52:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.255 04:52:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.255 04:52:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.255 04:52:12 -- common/autotest_common.sh@887 -- # return 0 00:11:49.255 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.255 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:49.255 04:52:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:49.514 04:52:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:49.514 04:52:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:49.514 04:52:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:49.514 04:52:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:49.514 04:52:12 -- common/autotest_common.sh@867 -- # local i 00:11:49.514 04:52:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.514 04:52:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.514 04:52:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:49.514 04:52:12 -- common/autotest_common.sh@871 -- # break 00:11:49.514 04:52:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.514 04:52:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.514 04:52:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.514 1+0 records in 00:11:49.514 1+0 records out 00:11:49.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109466 s, 3.7 MB/s 00:11:49.514 04:52:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.514 04:52:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.514 04:52:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.514 04:52:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.514 04:52:12 -- common/autotest_common.sh@887 -- # return 0 00:11:49.514 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.514 04:52:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:49.514 04:52:12 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:49.773 04:52:13 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd0", 00:11:49.773 "bdev_name": "Malloc0" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd1", 00:11:49.773 "bdev_name": "Malloc1p0" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd2", 00:11:49.773 "bdev_name": "Malloc1p1" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd3", 00:11:49.773 "bdev_name": "Malloc2p0" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd4", 00:11:49.773 "bdev_name": "Malloc2p1" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd5", 00:11:49.773 "bdev_name": "Malloc2p2" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd6", 00:11:49.773 "bdev_name": "Malloc2p3" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd7", 00:11:49.773 "bdev_name": "Malloc2p4" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd8", 00:11:49.773 "bdev_name": "Malloc2p5" 
00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd9", 00:11:49.773 "bdev_name": "Malloc2p6" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd10", 00:11:49.773 "bdev_name": "Malloc2p7" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd11", 00:11:49.773 "bdev_name": "TestPT" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd12", 00:11:49.773 "bdev_name": "raid0" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd13", 00:11:49.773 "bdev_name": "concat0" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd14", 00:11:49.773 "bdev_name": "raid1" 00:11:49.773 }, 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd15", 00:11:49.773 "bdev_name": "AIO0" 00:11:49.773 } 00:11:49.773 ]' 00:11:49.773 04:52:13 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:49.773 04:52:13 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:49.773 { 00:11:49.773 "nbd_device": "/dev/nbd0", 00:11:49.773 "bdev_name": "Malloc0" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd1", 00:11:49.774 "bdev_name": "Malloc1p0" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd2", 00:11:49.774 "bdev_name": "Malloc1p1" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd3", 00:11:49.774 "bdev_name": "Malloc2p0" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd4", 00:11:49.774 "bdev_name": "Malloc2p1" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd5", 00:11:49.774 "bdev_name": "Malloc2p2" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd6", 00:11:49.774 "bdev_name": "Malloc2p3" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd7", 00:11:49.774 "bdev_name": "Malloc2p4" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd8", 00:11:49.774 "bdev_name": "Malloc2p5" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd9", 00:11:49.774 "bdev_name": "Malloc2p6" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd10", 00:11:49.774 "bdev_name": "Malloc2p7" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd11", 00:11:49.774 "bdev_name": "TestPT" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd12", 00:11:49.774 "bdev_name": "raid0" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd13", 00:11:49.774 "bdev_name": "concat0" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd14", 00:11:49.774 "bdev_name": "raid1" 00:11:49.774 }, 00:11:49.774 { 00:11:49.774 "nbd_device": "/dev/nbd15", 00:11:49.774 "bdev_name": "AIO0" 00:11:49.774 } 00:11:49.774 ]' 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@51 -- # local i 00:11:49.774 04:52:13 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.774 04:52:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@41 -- # break 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.033 04:52:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@41 -- # break 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@41 -- # break 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.291 04:52:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@41 -- # break 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.550 04:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:50.808 04:52:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:51.066 04:52:14 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@41 -- # break 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@41 -- # break 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.066 04:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@41 -- # break 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.325 04:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@41 -- # break 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.583 04:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@41 -- # break 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.842 04:52:15 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@41 -- # break 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.101 04:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@41 -- # break 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.361 04:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@41 -- # break 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.620 04:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@41 -- # break 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.879 04:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
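From here down the trace is the inverse of the setup: each exported device is torn down with an nbd_stop_disk RPC, and waitfornbd_exit then waits for the kernel node to drop out of /proc/partitions. A sketch of that teardown half under the same bounded-poll pattern, assuming the loop breaks once grep stops matching (the trace shows the grep and the break, but not the failing branch explicitly):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # node is gone: stop waiting
        sleep 0.1                                          # assumed back-off
    done
}

for dev in /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15; do  # tail of nbd_list, trimmed for the sketch
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    waitfornbd_exit "$(basename "$dev")"
done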
00:11:53.138 04:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@41 -- # break 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.138 04:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@41 -- # break 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.397 04:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@41 -- # break 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.656 04:52:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@65 -- # true 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@65 -- # count=0 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@122 -- # count=0 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@127 -- # return 0 00:11:53.915 04:52:17 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@12 -- # local i 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.915 04:52:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:54.173 /dev/nbd0 00:11:54.173 04:52:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:54.173 04:52:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:54.173 04:52:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:54.173 04:52:17 -- common/autotest_common.sh@867 -- # local i 00:11:54.173 04:52:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.173 04:52:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.173 04:52:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:54.173 04:52:17 -- common/autotest_common.sh@871 -- # break 00:11:54.173 04:52:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.173 04:52:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.173 04:52:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.173 1+0 records in 00:11:54.173 1+0 records out 00:11:54.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028838 s, 14.2 MB/s 00:11:54.173 04:52:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.173 04:52:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.173 04:52:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.173 04:52:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.173 04:52:17 -- common/autotest_common.sh@887 -- # return 0 00:11:54.173 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.173 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.173 04:52:17 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:54.432 /dev/nbd1 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:54.432 04:52:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:54.432 04:52:17 -- common/autotest_common.sh@867 -- # local i 00:11:54.432 04:52:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.432 04:52:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.432 04:52:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:54.432 04:52:17 -- common/autotest_common.sh@871 -- # break 00:11:54.432 04:52:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.432 04:52:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.432 04:52:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.432 1+0 records in 00:11:54.432 1+0 records out 00:11:54.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317804 s, 12.9 MB/s 00:11:54.432 04:52:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.432 04:52:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.432 04:52:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.432 04:52:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.432 04:52:17 -- common/autotest_common.sh@887 -- # return 0 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:54.432 /dev/nbd10 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:54.432 04:52:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:54.432 04:52:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:54.432 04:52:17 -- common/autotest_common.sh@867 -- # local i 00:11:54.432 04:52:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.691 04:52:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.691 04:52:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:54.691 04:52:17 -- common/autotest_common.sh@871 -- # break 00:11:54.691 04:52:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.691 04:52:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.691 04:52:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.691 1+0 records in 00:11:54.691 1+0 records out 00:11:54.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257351 s, 15.9 MB/s 00:11:54.691 04:52:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.691 04:52:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.691 04:52:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.691 04:52:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.691 04:52:17 -- common/autotest_common.sh@887 -- # return 0 00:11:54.691 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.691 04:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
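This second pass (nbd_rpc_data_verify) re-exports the same 16 bdevs, now pinning each one to an explicit device path: the trace shows one nbd_start_disk call with both the bdev name and the target /dev/nbdN, followed by the usual wait-and-verify. A lockstep sketch of that mapping loop, using the first pairs from the bdev_list/nbd_list arrays declared above and the waitfornbd helper sketched earlier:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0)   # trimmed; the real run maps all 16 bdevs
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)

for i in "${!bdev_list[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
    waitfornbd "$(basename "${nbd_list[$i]}")"      # same wait-and-verify step as before
done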
00:11:54.691 04:52:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:54.691 /dev/nbd11 00:11:54.691 04:52:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:54.691 04:52:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:54.691 04:52:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:54.691 04:52:18 -- common/autotest_common.sh@867 -- # local i 00:11:54.691 04:52:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.691 04:52:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.691 04:52:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:54.691 04:52:18 -- common/autotest_common.sh@871 -- # break 00:11:54.691 04:52:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.691 04:52:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.691 04:52:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.691 1+0 records in 00:11:54.691 1+0 records out 00:11:54.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348426 s, 11.8 MB/s 00:11:54.691 04:52:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.691 04:52:18 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.691 04:52:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.691 04:52:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.691 04:52:18 -- common/autotest_common.sh@887 -- # return 0 00:11:54.691 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.691 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.691 04:52:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:54.951 /dev/nbd12 00:11:54.951 04:52:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:54.951 04:52:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:54.951 04:52:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:54.951 04:52:18 -- common/autotest_common.sh@867 -- # local i 00:11:54.951 04:52:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.951 04:52:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.951 04:52:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:54.951 04:52:18 -- common/autotest_common.sh@871 -- # break 00:11:54.951 04:52:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.951 04:52:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.951 04:52:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.951 1+0 records in 00:11:54.951 1+0 records out 00:11:54.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039624 s, 10.3 MB/s 00:11:54.951 04:52:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.951 04:52:18 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.951 04:52:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.951 04:52:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.951 04:52:18 -- common/autotest_common.sh@887 -- # return 0 00:11:54.951 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.951 04:52:18 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.951 04:52:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:55.279 /dev/nbd13 00:11:55.279 04:52:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:55.279 04:52:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:55.279 04:52:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:55.279 04:52:18 -- common/autotest_common.sh@867 -- # local i 00:11:55.279 04:52:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.279 04:52:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.279 04:52:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:55.279 04:52:18 -- common/autotest_common.sh@871 -- # break 00:11:55.279 04:52:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.279 04:52:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.280 04:52:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.280 1+0 records in 00:11:55.280 1+0 records out 00:11:55.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436122 s, 9.4 MB/s 00:11:55.280 04:52:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.280 04:52:18 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.280 04:52:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.280 04:52:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.280 04:52:18 -- common/autotest_common.sh@887 -- # return 0 00:11:55.280 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.280 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.280 04:52:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:55.582 /dev/nbd14 00:11:55.582 04:52:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:55.582 04:52:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:55.582 04:52:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:55.582 04:52:18 -- common/autotest_common.sh@867 -- # local i 00:11:55.582 04:52:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.582 04:52:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.582 04:52:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:55.582 04:52:18 -- common/autotest_common.sh@871 -- # break 00:11:55.582 04:52:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.582 04:52:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.582 04:52:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.582 1+0 records in 00:11:55.582 1+0 records out 00:11:55.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350501 s, 11.7 MB/s 00:11:55.582 04:52:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.582 04:52:18 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.582 04:52:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.582 04:52:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.582 04:52:18 -- common/autotest_common.sh@887 -- # return 0 00:11:55.582 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
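The sixteen near-identical blocks come from one driver loop that walks the bdev and nbd arrays in lockstep, issuing an RPC per pair and then waiting on the device. A sketch of that nbd_start_disks flow, inferred from the trace (the traced script hard-codes the count of 16; the array-length bound here is a simplification):

nbd_start_disks() {
    local rpc_server=$1
    local -a bdev_list=($2)   # space-separated bdev names, as passed in the call above
    local -a nbd_list=($3)    # space-separated /dev/nbd* paths
    local i
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # Ask the SPDK app over its RPC socket to export the bdev as an nbd device.
        scripts/rpc.py -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        # Block until the kernel device is actually usable.
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
}

# e.g. nbd_start_disks /var/tmp/spdk-nbd.sock "Malloc0 Malloc1p0" "/dev/nbd0 /dev/nbd1"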
00:11:55.582 04:52:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.582 04:52:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:55.858 /dev/nbd15 00:11:55.858 04:52:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:55.858 04:52:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:55.858 04:52:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:55.858 04:52:19 -- common/autotest_common.sh@867 -- # local i 00:11:55.858 04:52:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.858 04:52:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.858 04:52:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:55.858 04:52:19 -- common/autotest_common.sh@871 -- # break 00:11:55.858 04:52:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.858 04:52:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.858 04:52:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.858 1+0 records in 00:11:55.858 1+0 records out 00:11:55.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422834 s, 9.7 MB/s 00:11:55.858 04:52:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.858 04:52:19 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.858 04:52:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.858 04:52:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.858 04:52:19 -- common/autotest_common.sh@887 -- # return 0 00:11:55.858 04:52:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.858 04:52:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.858 04:52:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:56.117 /dev/nbd2 00:11:56.117 04:52:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:56.117 04:52:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:56.117 04:52:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:56.117 04:52:19 -- common/autotest_common.sh@867 -- # local i 00:11:56.117 04:52:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.117 04:52:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.117 04:52:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:56.117 04:52:19 -- common/autotest_common.sh@871 -- # break 00:11:56.117 04:52:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.117 04:52:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.117 04:52:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.117 1+0 records in 00:11:56.117 1+0 records out 00:11:56.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444715 s, 9.2 MB/s 00:11:56.117 04:52:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.117 04:52:19 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.117 04:52:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.117 04:52:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.117 04:52:19 -- common/autotest_common.sh@887 -- # return 0 00:11:56.117 04:52:19 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:11:56.117 04:52:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.117 04:52:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:56.376 /dev/nbd3 00:11:56.376 04:52:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:56.376 04:52:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:56.376 04:52:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:56.376 04:52:19 -- common/autotest_common.sh@867 -- # local i 00:11:56.376 04:52:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.376 04:52:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.376 04:52:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:56.376 04:52:19 -- common/autotest_common.sh@871 -- # break 00:11:56.376 04:52:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.376 04:52:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.376 04:52:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.376 1+0 records in 00:11:56.376 1+0 records out 00:11:56.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052568 s, 7.8 MB/s 00:11:56.376 04:52:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.376 04:52:19 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.376 04:52:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.376 04:52:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.376 04:52:19 -- common/autotest_common.sh@887 -- # return 0 00:11:56.376 04:52:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.376 04:52:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.376 04:52:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:56.642 /dev/nbd4 00:11:56.642 04:52:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:56.642 04:52:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:56.642 04:52:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:56.642 04:52:20 -- common/autotest_common.sh@867 -- # local i 00:11:56.642 04:52:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.642 04:52:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.642 04:52:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:56.642 04:52:20 -- common/autotest_common.sh@871 -- # break 00:11:56.642 04:52:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.642 04:52:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.642 04:52:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.642 1+0 records in 00:11:56.642 1+0 records out 00:11:56.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580051 s, 7.1 MB/s 00:11:56.642 04:52:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.642 04:52:20 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.642 04:52:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.642 04:52:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.642 04:52:20 -- common/autotest_common.sh@887 -- # return 0 00:11:56.642 04:52:20 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.642 04:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.642 04:52:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:56.901 /dev/nbd5 00:11:56.901 04:52:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:56.901 04:52:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:56.901 04:52:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:56.901 04:52:20 -- common/autotest_common.sh@867 -- # local i 00:11:56.901 04:52:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.901 04:52:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.901 04:52:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:56.901 04:52:20 -- common/autotest_common.sh@871 -- # break 00:11:56.901 04:52:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.901 04:52:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.901 04:52:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.901 1+0 records in 00:11:56.901 1+0 records out 00:11:56.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454785 s, 9.0 MB/s 00:11:56.901 04:52:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.901 04:52:20 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.901 04:52:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.901 04:52:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.901 04:52:20 -- common/autotest_common.sh@887 -- # return 0 00:11:56.901 04:52:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.901 04:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.901 04:52:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:57.161 /dev/nbd6 00:11:57.161 04:52:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:57.161 04:52:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:57.161 04:52:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:57.161 04:52:20 -- common/autotest_common.sh@867 -- # local i 00:11:57.161 04:52:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.161 04:52:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.161 04:52:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:57.161 04:52:20 -- common/autotest_common.sh@871 -- # break 00:11:57.161 04:52:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.161 04:52:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.161 04:52:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.161 1+0 records in 00:11:57.161 1+0 records out 00:11:57.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452523 s, 9.1 MB/s 00:11:57.161 04:52:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.161 04:52:20 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.161 04:52:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.161 04:52:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.161 04:52:20 -- common/autotest_common.sh@887 -- # return 0 00:11:57.161 04:52:20 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.161 04:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.161 04:52:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:57.420 /dev/nbd7 00:11:57.420 04:52:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:57.420 04:52:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:57.420 04:52:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:57.420 04:52:20 -- common/autotest_common.sh@867 -- # local i 00:11:57.420 04:52:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.420 04:52:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.420 04:52:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:57.420 04:52:20 -- common/autotest_common.sh@871 -- # break 00:11:57.420 04:52:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.420 04:52:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.420 04:52:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.420 1+0 records in 00:11:57.420 1+0 records out 00:11:57.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822275 s, 5.0 MB/s 00:11:57.420 04:52:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.420 04:52:20 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.420 04:52:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.420 04:52:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.420 04:52:20 -- common/autotest_common.sh@887 -- # return 0 00:11:57.420 04:52:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.420 04:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.420 04:52:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:57.679 /dev/nbd8 00:11:57.679 04:52:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:57.679 04:52:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:57.679 04:52:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:57.679 04:52:21 -- common/autotest_common.sh@867 -- # local i 00:11:57.679 04:52:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.679 04:52:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.679 04:52:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:57.679 04:52:21 -- common/autotest_common.sh@871 -- # break 00:11:57.679 04:52:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.679 04:52:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.679 04:52:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.679 1+0 records in 00:11:57.679 1+0 records out 00:11:57.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840486 s, 4.9 MB/s 00:11:57.679 04:52:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.679 04:52:21 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.679 04:52:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.679 04:52:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.679 04:52:21 -- common/autotest_common.sh@887 -- # return 0 00:11:57.679 04:52:21 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.679 04:52:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.679 04:52:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:57.939 /dev/nbd9 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:57.939 04:52:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:57.939 04:52:21 -- common/autotest_common.sh@867 -- # local i 00:11:57.939 04:52:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.939 04:52:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.939 04:52:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:57.939 04:52:21 -- common/autotest_common.sh@871 -- # break 00:11:57.939 04:52:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.939 04:52:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.939 04:52:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.939 1+0 records in 00:11:57.939 1+0 records out 00:11:57.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010046 s, 4.1 MB/s 00:11:57.939 04:52:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.939 04:52:21 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.939 04:52:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.939 04:52:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.939 04:52:21 -- common/autotest_common.sh@887 -- # return 0 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.939 04:52:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd0", 00:11:58.197 "bdev_name": "Malloc0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd1", 00:11:58.197 "bdev_name": "Malloc1p0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd10", 00:11:58.197 "bdev_name": "Malloc1p1" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd11", 00:11:58.197 "bdev_name": "Malloc2p0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd12", 00:11:58.197 "bdev_name": "Malloc2p1" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd13", 00:11:58.197 "bdev_name": "Malloc2p2" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd14", 00:11:58.197 "bdev_name": "Malloc2p3" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd15", 00:11:58.197 "bdev_name": "Malloc2p4" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd2", 00:11:58.197 "bdev_name": "Malloc2p5" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd3", 00:11:58.197 "bdev_name": "Malloc2p6" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd4", 00:11:58.197 "bdev_name": "Malloc2p7" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd5", 00:11:58.197 "bdev_name": 
"TestPT" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd6", 00:11:58.197 "bdev_name": "raid0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd7", 00:11:58.197 "bdev_name": "concat0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd8", 00:11:58.197 "bdev_name": "raid1" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd9", 00:11:58.197 "bdev_name": "AIO0" 00:11:58.197 } 00:11:58.197 ]' 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd0", 00:11:58.197 "bdev_name": "Malloc0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd1", 00:11:58.197 "bdev_name": "Malloc1p0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd10", 00:11:58.197 "bdev_name": "Malloc1p1" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd11", 00:11:58.197 "bdev_name": "Malloc2p0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd12", 00:11:58.197 "bdev_name": "Malloc2p1" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd13", 00:11:58.197 "bdev_name": "Malloc2p2" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd14", 00:11:58.197 "bdev_name": "Malloc2p3" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd15", 00:11:58.197 "bdev_name": "Malloc2p4" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd2", 00:11:58.197 "bdev_name": "Malloc2p5" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd3", 00:11:58.197 "bdev_name": "Malloc2p6" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd4", 00:11:58.197 "bdev_name": "Malloc2p7" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd5", 00:11:58.197 "bdev_name": "TestPT" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd6", 00:11:58.197 "bdev_name": "raid0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd7", 00:11:58.197 "bdev_name": "concat0" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd8", 00:11:58.197 "bdev_name": "raid1" 00:11:58.197 }, 00:11:58.197 { 00:11:58.197 "nbd_device": "/dev/nbd9", 00:11:58.197 "bdev_name": "AIO0" 00:11:58.197 } 00:11:58.197 ]' 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:58.197 /dev/nbd1 00:11:58.197 /dev/nbd10 00:11:58.197 /dev/nbd11 00:11:58.197 /dev/nbd12 00:11:58.197 /dev/nbd13 00:11:58.197 /dev/nbd14 00:11:58.197 /dev/nbd15 00:11:58.197 /dev/nbd2 00:11:58.197 /dev/nbd3 00:11:58.197 /dev/nbd4 00:11:58.197 /dev/nbd5 00:11:58.197 /dev/nbd6 00:11:58.197 /dev/nbd7 00:11:58.197 /dev/nbd8 00:11:58.197 /dev/nbd9' 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:58.197 /dev/nbd1 00:11:58.197 /dev/nbd10 00:11:58.197 /dev/nbd11 00:11:58.197 /dev/nbd12 00:11:58.197 /dev/nbd13 00:11:58.197 /dev/nbd14 00:11:58.197 /dev/nbd15 00:11:58.197 /dev/nbd2 00:11:58.197 /dev/nbd3 00:11:58.197 /dev/nbd4 00:11:58.197 /dev/nbd5 00:11:58.197 /dev/nbd6 00:11:58.197 /dev/nbd7 00:11:58.197 /dev/nbd8 00:11:58.197 /dev/nbd9' 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@65 -- # count=16 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@95 -- # count=16 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:58.197 04:52:21 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:58.197 04:52:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:58.197 256+0 records in 00:11:58.197 256+0 records out 00:11:58.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691121 s, 152 MB/s 00:11:58.198 04:52:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.198 04:52:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:58.455 256+0 records in 00:11:58.455 256+0 records out 00:11:58.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158832 s, 6.6 MB/s 00:11:58.455 04:52:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.455 04:52:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:58.752 256+0 records in 00:11:58.752 256+0 records out 00:11:58.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168417 s, 6.2 MB/s 00:11:58.752 04:52:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.752 04:52:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:58.752 256+0 records in 00:11:58.752 256+0 records out 00:11:58.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150472 s, 7.0 MB/s 00:11:58.752 04:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.753 04:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:59.010 256+0 records in 00:11:59.010 256+0 records out 00:11:59.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160267 s, 6.5 MB/s 00:11:59.010 04:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.010 04:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:59.010 256+0 records in 00:11:59.010 256+0 records out 00:11:59.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164773 s, 6.4 MB/s 00:11:59.010 04:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.010 04:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:59.268 256+0 records in 00:11:59.268 256+0 records out 00:11:59.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16304 s, 6.4 MB/s 00:11:59.268 04:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.268 04:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:59.525 256+0 records 
in 00:11:59.525 256+0 records out 00:11:59.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161415 s, 6.5 MB/s 00:11:59.525 04:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.525 04:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:59.525 256+0 records in 00:11:59.525 256+0 records out 00:11:59.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148175 s, 7.1 MB/s 00:11:59.525 04:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.525 04:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:59.783 256+0 records in 00:11:59.783 256+0 records out 00:11:59.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15762 s, 6.7 MB/s 00:11:59.783 04:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.783 04:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:59.783 256+0 records in 00:11:59.783 256+0 records out 00:11:59.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161321 s, 6.5 MB/s 00:11:59.783 04:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.783 04:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:00.042 256+0 records in 00:12:00.042 256+0 records out 00:12:00.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160575 s, 6.5 MB/s 00:12:00.042 04:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.042 04:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:00.300 256+0 records in 00:12:00.300 256+0 records out 00:12:00.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166793 s, 6.3 MB/s 00:12:00.300 04:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.300 04:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:00.300 256+0 records in 00:12:00.300 256+0 records out 00:12:00.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168441 s, 6.2 MB/s 00:12:00.300 04:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.300 04:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:00.559 256+0 records in 00:12:00.559 256+0 records out 00:12:00.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169073 s, 6.2 MB/s 00:12:00.559 04:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.559 04:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:00.819 256+0 records in 00:12:00.819 256+0 records out 00:12:00.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171014 s, 6.1 MB/s 00:12:00.819 04:52:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.819 04:52:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:01.078 256+0 records in 00:12:01.078 256+0 records out 00:12:01.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.25587 s, 4.1 MB/s 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:01.078 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@51 -- # local i 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.079 04:52:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@41 -- # break 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.648 04:52:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@41 -- # break 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.908 04:52:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:02.167 
04:52:25 -- bdev/nbd_common.sh@41 -- # break 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.167 04:52:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@41 -- # break 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.426 04:52:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:02.685 04:52:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:02.685 04:52:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:02.685 04:52:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:02.685 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.685 04:52:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.685 04:52:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:02.685 04:52:26 -- bdev/nbd_common.sh@41 -- # break 00:12:02.685 04:52:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.685 04:52:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.685 04:52:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@41 -- # break 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.944 04:52:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@41 -- # break 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.203 04:52:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:03.462 04:52:26 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@41 -- # break 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.462 04:52:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@41 -- # break 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.721 04:52:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@41 -- # break 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.982 04:52:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@41 -- # break 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.241 04:52:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@41 -- # break 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@45 -- # return 0 
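Teardown mirrors setup: each nbd_stop_disk RPC is followed by a waitfornbd_exit probe that polls /proc/partitions until the device name disappears. A sketch with the loop condition assumed (the trace shows only the grep and the break, not which grep result triggers the break):

nbd_stop_disks() {
    local rpc_server=$1
    local -a nbd_list=($2)
    local nbd
    for nbd in "${nbd_list[@]}"; do
        # Tear down the export, then wait for the kernel to drop the device.
        scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$nbd"
        waitfornbd_exit "$(basename "$nbd")"
    done
}

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # Assumed: stop polling once the device is gone from the partition table.
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed poll interval
    done
    return 0        # best-effort, matching the trace's unconditional 'return 0'
}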
00:12:04.500 04:52:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.500 04:52:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@41 -- # break 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.759 04:52:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@41 -- # break 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.018 04:52:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@41 -- # break 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.278 04:52:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@41 -- # break 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.537 04:52:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@65 -- # true 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@65 -- # count=0 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@104 -- # count=0 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@109 -- # return 0 00:12:05.796 04:52:29 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:05.796 04:52:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:06.055 malloc_lvol_verify 00:12:06.055 04:52:29 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:06.314 735ae61f-ad6f-4192-bad8-c7cbd228e9f6 00:12:06.314 04:52:29 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:06.573 c835e64e-3b1d-4b4b-a923-4d70f8eafc1d 00:12:06.573 04:52:29 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:06.832 /dev/nbd0 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:06.832 mke2fs 1.47.0 (5-Feb-2023) 00:12:06.832 00:12:06.832 Filesystem too small for a journal 00:12:06.832 Discarding device blocks: 0/1024 done 00:12:06.832 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:06.832 00:12:06.832 Allocating group tables: 0/1 done 00:12:06.832 Writing inode tables: 0/1 done 00:12:06.832 Writing superblocks and filesystem accounting information: 0/1 done 00:12:06.832 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@51 -- # local i 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.832 04:52:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:07.092 04:52:30 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@41 -- # break 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:07.092 04:52:30 -- bdev/nbd_common.sh@147 -- # return 0 00:12:07.092 04:52:30 -- bdev/blockdev.sh@324 -- # killprocess 65815 00:12:07.092 04:52:30 -- common/autotest_common.sh@936 -- # '[' -z 65815 ']' 00:12:07.092 04:52:30 -- common/autotest_common.sh@940 -- # kill -0 65815 00:12:07.092 04:52:30 -- common/autotest_common.sh@941 -- # uname 00:12:07.092 04:52:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.092 04:52:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65815 00:12:07.092 04:52:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:07.092 killing process with pid 65815 00:12:07.092 04:52:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:07.092 04:52:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65815' 00:12:07.092 04:52:30 -- common/autotest_common.sh@955 -- # kill 65815 00:12:07.092 04:52:30 -- common/autotest_common.sh@960 -- # wait 65815 00:12:09.629 04:52:32 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:09.629 00:12:09.629 real 0m25.956s 00:12:09.629 user 0m35.978s 00:12:09.629 sys 0m9.090s 00:12:09.629 ************************************ 00:12:09.629 END TEST bdev_nbd 00:12:09.629 ************************************ 00:12:09.629 04:52:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.629 04:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:09.629 04:52:32 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:09.629 04:52:32 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.629 04:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:09.629 ************************************ 00:12:09.629 START TEST bdev_fio 00:12:09.629 ************************************ 00:12:09.629 04:52:32 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@329 -- # local env_context 00:12:09.629 04:52:32 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:09.629 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:09.629 04:52:32 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:09.629 04:52:32 -- bdev/blockdev.sh@337 -- # echo '' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:09.629 04:52:32 -- bdev/blockdev.sh@337 -- # env_context= 00:12:09.629 04:52:32 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.629 04:52:32 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:09.629 04:52:32 -- common/autotest_common.sh@1271 -- # local 
bdev_type=AIO 00:12:09.629 04:52:32 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:09.629 04:52:32 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:09.629 04:52:32 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.629 04:52:32 -- common/autotest_common.sh@1290 -- # cat 00:12:09.629 04:52:32 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1303 -- # cat 00:12:09.629 04:52:32 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:09.629 04:52:32 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:09.629 04:52:32 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:09.629 04:52:32 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:09.629 04:52:32 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:09.629 04:52:32 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:09.629 04:52:32 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.629 04:52:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:09.629 04:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.629 04:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:09.629 ************************************ 00:12:09.629 START TEST bdev_fio_rw_verify 00:12:09.630 ************************************ 00:12:09.630 04:52:32 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.630 04:52:32 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.630 04:52:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:09.630 04:52:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:09.630 04:52:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:09.630 04:52:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:09.630 04:52:32 -- common/autotest_common.sh@1330 -- # shift 00:12:09.630 04:52:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:09.630 04:52:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:09.630 04:52:32 -- common/autotest_common.sh@1334 -- # 
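Each bdev gets its own fio job section appended to bdev.fio, all driven by the spdk_bdev external ioengine. The echo loop above amounts to the following sketch (the >> redirection is an assumption, since bash xtrace does not echo redirections, but the [job_*]/filename pairs are verbatim from the log):

    for b in "${bdevs_name[@]}"; do
        # one section per bdev; fio resolves "filename" through the
        # spdk_bdev ioengine rather than through the kernel block layer
        echo "[job_$b]"
        echo "filename=$b"
    done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio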
grep libasan 00:12:09.630 04:52:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:09.630 04:52:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:09.630 04:52:32 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:12:09.630 04:52:32 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:12:09.630 04:52:32 -- common/autotest_common.sh@1336 -- # break 00:12:09.630 04:52:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:09.630 04:52:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.630 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:09.630 fio-3.35 00:12:09.630 Starting 16 threads 00:12:21.889 00:12:21.889 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=66954: Mon Nov 18 04:52:44 2024 00:12:21.889 read: IOPS=81.4k, BW=318MiB/s (333MB/s)(3179MiB/10003msec) 00:12:21.889 slat (usec): min=2, max=15048, avg=35.42, stdev=242.65 00:12:21.889 clat (usec): min=11, max=16273, avg=273.96, stdev=688.18 00:12:21.889 lat (usec): min=29, max=16295, avg=309.38, stdev=727.93 00:12:21.889 clat percentiles 
(usec): 00:12:21.889 | 50.000th=[ 163], 99.000th=[ 4228], 99.900th=[ 7308], 99.990th=[11207], 00:12:21.889 | 99.999th=[15270] 00:12:21.889 write: IOPS=129k, BW=505MiB/s (530MB/s)(4976MiB/9844msec); 0 zone resets 00:12:21.889 slat (usec): min=6, max=22029, avg=60.59, stdev=324.44 00:12:21.889 clat (usec): min=11, max=22359, avg=353.43, stdev=780.84 00:12:21.889 lat (usec): min=42, max=22390, avg=414.02, stdev=842.35 00:12:21.889 clat percentiles (usec): 00:12:21.889 | 50.000th=[ 212], 99.000th=[ 4293], 99.900th=[ 7439], 99.990th=[12256], 00:12:21.889 | 99.999th=[15401] 00:12:21.889 bw ( KiB/s): min=329832, max=785846, per=98.56%, avg=510100.84, stdev=8074.87, samples=304 00:12:21.889 iops : min=82457, max=196461, avg=127524.74, stdev=2018.72, samples=304 00:12:21.889 lat (usec) : 20=0.01%, 50=0.68%, 100=14.27%, 250=56.79%, 500=23.85% 00:12:21.889 lat (usec) : 750=1.11%, 1000=0.17% 00:12:21.889 lat (msec) : 2=0.12%, 4=1.09%, 10=1.87%, 20=0.03%, 50=0.01% 00:12:21.889 cpu : usr=57.83%, sys=2.35%, ctx=236341, majf=0, minf=105497 00:12:21.889 IO depths : 1=11.3%, 2=24.1%, 4=51.6%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.889 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.889 issued rwts: total=813792,1273744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.889 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:21.889 00:12:21.889 Run status group 0 (all jobs): 00:12:21.889 READ: bw=318MiB/s (333MB/s), 318MiB/s-318MiB/s (333MB/s-333MB/s), io=3179MiB (3333MB), run=10003-10003msec 00:12:21.889 WRITE: bw=505MiB/s (530MB/s), 505MiB/s-505MiB/s (530MB/s-530MB/s), io=4976MiB (5217MB), run=9844-9844msec 00:12:23.798 ----------------------------------------------------- 00:12:23.798 Suppressions used: 00:12:23.798 count bytes template 00:12:23.798 16 140 /usr/src/fio/parse.c 00:12:23.798 9499 911904 /usr/src/fio/iolog.c 00:12:23.798 1 904 libcrypto.so 00:12:23.798 ----------------------------------------------------- 00:12:23.798 00:12:23.798 00:12:23.798 real 0m14.013s 00:12:23.798 user 1m37.449s 00:12:23.798 sys 0m4.714s 00:12:23.798 ************************************ 00:12:23.798 END TEST bdev_fio_rw_verify 00:12:23.798 ************************************ 00:12:23.798 04:52:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:23.798 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.798 04:52:46 -- bdev/blockdev.sh@348 -- # rm -f 00:12:23.798 04:52:46 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:23.798 04:52:47 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:23.798 04:52:47 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:23.798 04:52:47 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:23.798 04:52:47 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:23.798 04:52:47 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:23.798 04:52:47 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:23.798 04:52:47 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:23.798 04:52:47 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:23.798 04:52:47 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:23.798 04:52:47 -- common/autotest_common.sh@1288 -- # touch 
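The rw-verify pass that just completed relies on an LD_PRELOAD trick visible in the xtrace: /usr/src/fio/fio itself is not built with ASAN, so the harness locates the ASAN runtime that the SPDK fio plugin was linked against and preloads it together with the plugin, the usual pattern for letting the sanitizer initialize before an uninstrumented binary loads instrumented code. Condensed from the log (remaining --spdk_mem/--aux-path flags as shown there):

    # which ASAN runtime does the plugin need?
    asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
    # preload runtime + plugin, then run fio against the generated job file
    LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json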
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:23.798 04:52:47 -- common/autotest_common.sh@1290 -- # cat 00:12:23.798 04:52:47 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:23.798 04:52:47 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:23.798 04:52:47 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:23.798 04:52:47 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:23.799 04:52:47 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c3b0bb8-fcaf-4dc4-bb7d-36ebb24938dc"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c3b0bb8-fcaf-4dc4-bb7d-36ebb24938dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f0594ba0-8bd9-573f-85c0-4780d2a95551"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f0594ba0-8bd9-573f-85c0-4780d2a95551",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f9479ca7-ce3d-5aeb-860b-1de2ecbf4e6c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f9479ca7-ce3d-5aeb-860b-1de2ecbf4e6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9cdde703-3a2b-5611-8f59-da00df5eaf1e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9cdde703-3a2b-5611-8f59-da00df5eaf1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' 
' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "1ebb1277-c0a1-52d0-930c-93e16a22321f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ebb1277-c0a1-52d0-930c-93e16a22321f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "0634d059-c94f-536c-9176-812de044e15c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0634d059-c94f-536c-9176-812de044e15c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "20f06192-ebd7-54d9-b889-6cda7e4bd821"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20f06192-ebd7-54d9-b889-6cda7e4bd821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cda27556-0891-5cf6-abe5-ae61e8b72d63"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cda27556-0891-5cf6-abe5-ae61e8b72d63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "c64ed296-f326-5c3a-a315-b04d1205f634"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c64ed296-f326-5c3a-a315-b04d1205f634",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "da5e6121-b379-5e17-ab19-481eebf2ce09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "da5e6121-b379-5e17-ab19-481eebf2ce09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c120acbd-5d09-5fba-8494-58ffd454eed2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c120acbd-5d09-5fba-8494-58ffd454eed2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "cbcf9a3c-fd98-56a9-b9b8-7796785bbca1"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cbcf9a3c-fd98-56a9-b9b8-7796785bbca1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2a17c6be-bdcb-4642-bc2f-404ffc28a628"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a17c6be-bdcb-4642-bc2f-404ffc28a628",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a17c6be-bdcb-4642-bc2f-404ffc28a628",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cbd61335-fa97-4443-88e1-bfbd2fd39b26",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f0bd2ef3-4b1b-4afa-9a39-b8a91612f7cd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7c1bdec2-a7c7-400c-93bf-33103b8abc5c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7c1bdec2-a7c7-400c-93bf-33103b8abc5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7c1bdec2-a7c7-400c-93bf-33103b8abc5c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "4521ff8b-683c-499c-b0a0-3abdcc10e5d5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e7fb89f7-2a71-4946-9242-b1c9592b4cd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "192f6de6-eae3-449e-b39f-544a873016e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "192f6de6-eae3-449e-b39f-544a873016e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "192f6de6-eae3-449e-b39f-544a873016e3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "1d4399d5-b6ee-44a4-9dc1-b6d5072bc3b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8cb732f7-36f0-45d7-b5e7-ad5d92215636",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "18ea5e56-5565-4668-a896-40808c2fa663"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "18ea5e56-5565-4668-a896-40808c2fa663",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:23.799 04:52:47 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:23.799 Malloc1p0 00:12:23.799 Malloc1p1 00:12:23.799 Malloc2p0 00:12:23.799 Malloc2p1 00:12:23.799 Malloc2p2 00:12:23.799 Malloc2p3 00:12:23.799 Malloc2p4 00:12:23.799 Malloc2p5 00:12:23.799 Malloc2p6 00:12:23.799 Malloc2p7 00:12:23.799 TestPT 00:12:23.799 raid0 00:12:23.799 concat0 ]] 00:12:23.799 04:52:47 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c3b0bb8-fcaf-4dc4-bb7d-36ebb24938dc"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c3b0bb8-fcaf-4dc4-bb7d-36ebb24938dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f0594ba0-8bd9-573f-85c0-4780d2a95551"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f0594ba0-8bd9-573f-85c0-4780d2a95551",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f9479ca7-ce3d-5aeb-860b-1de2ecbf4e6c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f9479ca7-ce3d-5aeb-860b-1de2ecbf4e6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9cdde703-3a2b-5611-8f59-da00df5eaf1e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9cdde703-3a2b-5611-8f59-da00df5eaf1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "1ebb1277-c0a1-52d0-930c-93e16a22321f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ebb1277-c0a1-52d0-930c-93e16a22321f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "0634d059-c94f-536c-9176-812de044e15c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0634d059-c94f-536c-9176-812de044e15c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "20f06192-ebd7-54d9-b889-6cda7e4bd821"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20f06192-ebd7-54d9-b889-6cda7e4bd821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cda27556-0891-5cf6-abe5-ae61e8b72d63"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cda27556-0891-5cf6-abe5-ae61e8b72d63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "c64ed296-f326-5c3a-a315-b04d1205f634"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c64ed296-f326-5c3a-a315-b04d1205f634",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "da5e6121-b379-5e17-ab19-481eebf2ce09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "da5e6121-b379-5e17-ab19-481eebf2ce09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c120acbd-5d09-5fba-8494-58ffd454eed2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c120acbd-5d09-5fba-8494-58ffd454eed2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "cbcf9a3c-fd98-56a9-b9b8-7796785bbca1"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cbcf9a3c-fd98-56a9-b9b8-7796785bbca1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2a17c6be-bdcb-4642-bc2f-404ffc28a628"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a17c6be-bdcb-4642-bc2f-404ffc28a628",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a17c6be-bdcb-4642-bc2f-404ffc28a628",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "cbd61335-fa97-4443-88e1-bfbd2fd39b26",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f0bd2ef3-4b1b-4afa-9a39-b8a91612f7cd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7c1bdec2-a7c7-400c-93bf-33103b8abc5c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7c1bdec2-a7c7-400c-93bf-33103b8abc5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7c1bdec2-a7c7-400c-93bf-33103b8abc5c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "4521ff8b-683c-499c-b0a0-3abdcc10e5d5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e7fb89f7-2a71-4946-9242-b1c9592b4cd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "192f6de6-eae3-449e-b39f-544a873016e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "192f6de6-eae3-449e-b39f-544a873016e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "192f6de6-eae3-449e-b39f-544a873016e3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "1d4399d5-b6ee-44a4-9dc1-b6d5072bc3b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "8cb732f7-36f0-45d7-b5e7-ad5d92215636",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "18ea5e56-5565-4668-a896-40808c2fa663"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "18ea5e56-5565-4668-a896-40808c2fa663",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo 
filename=Malloc2p2 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:23.800 04:52:47 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:23.800 04:52:47 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:23.800 04:52:47 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:23.800 04:52:47 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:23.800 04:52:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:23.800 04:52:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.800 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:23.800 ************************************ 00:12:23.800 START TEST bdev_fio_trim 00:12:23.800 ************************************ 00:12:23.800 04:52:47 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:23.800 04:52:47 -- common/autotest_common.sh@1345 -- # fio_plugin 
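Note the jq filter threaded through the trim loop above: only bdevs whose supported_io_types.unmap is true get a job section, which is why the trim pass below starts 14 threads where the verify pass ran 16 -- raid1 and AIO0 both report "unmap": false in the per-bdev JSON printed earlier. The selection step on its own:

    # keep only trim-capable bdevs from the per-bdev JSON shown above
    printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'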
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:23.800 04:52:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:23.800 04:52:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.800 04:52:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:23.800 04:52:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.800 04:52:47 -- common/autotest_common.sh@1330 -- # shift 00:12:23.800 04:52:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:23.800 04:52:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.800 04:52:47 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:23.800 04:52:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.800 04:52:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:23.800 04:52:47 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:12:23.801 04:52:47 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:12:23.801 04:52:47 -- common/autotest_common.sh@1336 -- # break 00:12:23.801 04:52:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:23.801 04:52:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:23.801 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:23.801 fio-3.35 00:12:23.801 Starting 14 threads 00:12:36.005 00:12:36.005 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=67153: Mon Nov 18 04:52:58 2024 00:12:36.005 write: IOPS=165k, BW=646MiB/s (677MB/s)(6456MiB/10001msec); 0 zone resets 00:12:36.005 slat (usec): min=3, max=7408, avg=30.32, stdev=185.33 00:12:36.005 clat (usec): min=21, max=8411, avg=219.46, stdev=548.47 00:12:36.005 lat (usec): min=31, max=8416, avg=249.78, stdev=577.36 00:12:36.005 clat percentiles (usec): 00:12:36.005 | 50.000th=[ 143], 99.000th=[ 4178], 99.900th=[ 6128], 99.990th=[ 7242], 00:12:36.005 | 99.999th=[ 7570] 00:12:36.005 bw ( KiB/s): min=506240, max=804720, per=99.98%, avg=660842.89, stdev=8634.00, samples=266 00:12:36.005 iops : min=126559, max=201180, avg=165210.37, stdev=2158.49, samples=266 00:12:36.005 trim: IOPS=165k, BW=646MiB/s (677MB/s)(6456MiB/10001msec); 0 zone resets 00:12:36.005 slat (usec): min=4, max=7242, avg=20.39, stdev=153.10 00:12:36.005 clat (usec): min=4, max=8237, avg=226.65, stdev=504.31 00:12:36.005 lat (usec): min=15, max=8253, avg=247.04, stdev=526.33 00:12:36.005 clat percentiles (usec): 00:12:36.005 | 50.000th=[ 163], 99.000th=[ 4113], 99.900th=[ 6128], 99.990th=[ 7242], 00:12:36.005 | 99.999th=[ 7373] 00:12:36.005 bw ( KiB/s): min=506288, max=804712, per=99.98%, avg=660842.89, stdev=8633.91, samples=266 00:12:36.006 iops : min=126571, max=201178, avg=165210.37, stdev=2158.47, samples=266 00:12:36.006 lat (usec) : 10=0.03%, 20=0.09%, 50=0.70%, 100=15.17%, 250=78.81% 00:12:36.006 lat (usec) : 500=3.43%, 750=0.03%, 1000=0.01% 00:12:36.006 lat (msec) : 2=0.02%, 4=0.46%, 10=1.26% 00:12:36.006 cpu : usr=69.45%, sys=0.07%, ctx=147712, majf=0, minf=15505 00:12:36.006 IO depths : 1=12.4%, 2=24.8%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 issued rwts: total=0,1652677,1652682,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.006 00:12:36.006 Run status group 0 (all jobs): 00:12:36.006 WRITE: bw=646MiB/s (677MB/s), 646MiB/s-646MiB/s (677MB/s-677MB/s), io=6456MiB (6769MB), run=10001-10001msec 00:12:36.006 TRIM: bw=646MiB/s (677MB/s), 646MiB/s-646MiB/s (677MB/s-677MB/s), io=6456MiB (6769MB), run=10001-10001msec 00:12:37.382 ----------------------------------------------------- 00:12:37.382 Suppressions used: 00:12:37.382 count bytes template 00:12:37.382 14 129 /usr/src/fio/parse.c 00:12:37.382 1 904 libcrypto.so 00:12:37.382 ----------------------------------------------------- 00:12:37.382 00:12:37.382 00:12:37.382 real 0m13.524s 00:12:37.382 user 1m42.060s 00:12:37.382 sys 0m0.583s 00:12:37.382 04:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:37.382 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 ************************************ 00:12:37.382 END TEST bdev_fio_trim 00:12:37.382 ************************************ 00:12:37.382 04:53:00 -- bdev/blockdev.sh@366 -- # rm -f 00:12:37.382 04:53:00 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.382 
/home/vagrant/spdk_repo/spdk 00:12:37.382 04:53:00 -- bdev/blockdev.sh@368 -- # popd 00:12:37.382 04:53:00 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:37.382 00:12:37.382 real 0m27.806s 00:12:37.382 user 3m19.607s 00:12:37.382 sys 0m5.438s 00:12:37.382 04:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:37.382 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 ************************************ 00:12:37.382 END TEST bdev_fio 00:12:37.382 ************************************ 00:12:37.382 04:53:00 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:37.382 04:53:00 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:37.382 04:53:00 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:37.383 04:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.383 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:12:37.383 ************************************ 00:12:37.383 START TEST bdev_verify 00:12:37.383 ************************************ 00:12:37.383 04:53:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:37.383 [2024-11-18 04:53:00.743890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:37.383 [2024-11-18 04:53:00.744055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67330 ] 00:12:37.641 [2024-11-18 04:53:00.909907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.641 [2024-11-18 04:53:01.131069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.641 [2024-11-18 04:53:01.131069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.209 [2024-11-18 04:53:01.484729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.209 [2024-11-18 04:53:01.484836] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.209 [2024-11-18 04:53:01.492692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.209 [2024-11-18 04:53:01.492771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.209 [2024-11-18 04:53:01.500720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.209 [2024-11-18 04:53:01.500777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:38.209 [2024-11-18 04:53:01.500825] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:38.209 [2024-11-18 04:53:01.670285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.209 [2024-11-18 04:53:01.670395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.209 [2024-11-18 04:53:01.670421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:38.209 [2024-11-18 04:53:01.670460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.209 [2024-11-18 
04:53:01.673258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.209 [2024-11-18 04:53:01.673315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:38.777 Running I/O for 5 seconds... 00:12:44.053 00:12:44.053 Latency(us) 00:12:44.053 [2024-11-18T04:53:07.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x1000 00:12:44.053 Malloc0 : 5.17 1627.53 6.36 0.00 0.00 78137.00 2427.81 231639.97 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x1000 length 0x1000 00:12:44.053 Malloc0 : 5.17 1603.42 6.26 0.00 0.00 79292.07 2353.34 233546.47 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x800 00:12:44.053 Malloc1p0 : 5.17 1129.94 4.41 0.00 0.00 112397.04 4557.73 136314.88 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x800 length 0x800 00:12:44.053 Malloc1p0 : 5.17 1115.41 4.36 0.00 0.00 113898.81 4557.73 136314.88 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x800 00:12:44.053 Malloc1p1 : 5.17 1129.48 4.41 0.00 0.00 112243.07 4408.79 131548.63 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x800 length 0x800 00:12:44.053 Malloc1p1 : 5.17 1115.12 4.36 0.00 0.00 113722.19 4408.79 131548.63 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x200 00:12:44.053 Malloc2p0 : 5.18 1129.02 4.41 0.00 0.00 112084.98 4408.79 127735.62 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x200 length 0x200 00:12:44.053 Malloc2p0 : 5.17 1114.84 4.35 0.00 0.00 113544.77 4408.79 127735.62 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x200 00:12:44.053 Malloc2p1 : 5.18 1128.56 4.41 0.00 0.00 111923.62 4289.63 123922.62 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x200 length 0x200 00:12:44.053 Malloc2p1 : 5.17 1114.54 4.35 0.00 0.00 113375.28 4289.63 123922.62 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x200 00:12:44.053 Malloc2p2 : 5.18 1128.10 4.41 0.00 0.00 111772.24 4021.53 121539.49 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x200 length 0x200 00:12:44.053 Malloc2p2 : 5.17 1114.01 4.35 0.00 0.00 113214.63 4021.53 121539.49 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p3 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x0 length 0x200 00:12:44.053 Malloc2p3 : 5.18 1127.65 4.40 0.00 0.00 111649.85 4230.05 118679.74 00:12:44.053 [2024-11-18T04:53:07.577Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.053 Verification LBA range: start 0x200 length 0x200 00:12:44.054 Malloc2p3 : 5.18 1113.48 4.35 0.00 0.00 113086.38 4289.63 118203.11 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x200 00:12:44.054 Malloc2p4 : 5.19 1127.13 4.40 0.00 0.00 111488.33 4170.47 115819.99 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x200 length 0x200 00:12:44.054 Malloc2p4 : 5.18 1112.95 4.35 0.00 0.00 112940.22 4170.47 115819.99 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x200 00:12:44.054 Malloc2p5 : 5.19 1126.55 4.40 0.00 0.00 111351.78 4081.11 112960.23 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x200 length 0x200 00:12:44.054 Malloc2p5 : 5.18 1112.42 4.35 0.00 0.00 112786.77 4110.89 112960.23 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x200 00:12:44.054 Malloc2p6 : 5.19 1125.93 4.40 0.00 0.00 111213.40 4408.79 110100.48 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x200 length 0x200 00:12:44.054 Malloc2p6 : 5.18 1111.90 4.34 0.00 0.00 112648.34 4349.21 110100.48 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x200 00:12:44.054 Malloc2p7 : 5.19 1125.65 4.40 0.00 0.00 111034.35 3991.74 107717.35 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x200 length 0x200 00:12:44.054 Malloc2p7 : 5.19 1111.36 4.34 0.00 0.00 112494.84 3932.16 107240.73 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x1000 00:12:44.054 TestPT : 5.19 1112.25 4.34 0.00 0.00 112225.81 8638.84 107240.73 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x1000 length 0x1000 00:12:44.054 TestPT : 5.19 1094.39 4.27 0.00 0.00 113946.74 10664.49 108193.98 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x2000 00:12:44.054 raid0 : 5.20 1125.06 4.39 0.00 0.00 110637.78 4408.79 95325.09 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x2000 length 0x2000 00:12:44.054 raid0 : 5.20 1124.60 4.39 0.00 0.00 111212.84 4200.26 97231.59 00:12:44.054 
[2024-11-18T04:53:07.578Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x2000 00:12:44.054 concat0 : 5.20 1141.24 4.46 0.00 0.00 109390.46 3604.48 90558.84 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x2000 length 0x2000 00:12:44.054 concat0 : 5.20 1124.31 4.39 0.00 0.00 111036.30 4736.47 92465.34 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x1000 00:12:44.054 raid1 : 5.20 1140.61 4.46 0.00 0.00 109224.00 4796.04 85315.96 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x1000 length 0x1000 00:12:44.054 raid1 : 5.20 1124.03 4.39 0.00 0.00 110865.07 4498.15 87222.46 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x0 length 0x4e2 00:12:44.054 AIO0 : 5.21 1139.41 4.45 0.00 0.00 109126.11 3991.74 82456.20 00:12:44.054 [2024-11-18T04:53:07.578Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:44.054 Verification LBA range: start 0x4e2 length 0x4e2 00:12:44.054 AIO0 : 5.20 1123.40 4.39 0.00 0.00 110682.65 4081.11 83409.45 00:12:44.054 [2024-11-18T04:53:07.578Z] =================================================================================================================== 00:12:44.054 [2024-11-18T04:53:07.578Z] Total : 36894.30 144.12 0.00 0.00 108998.10 2353.34 233546.47 00:12:45.960 00:12:45.960 real 0m8.613s 00:12:45.960 user 0m15.571s 00:12:45.960 sys 0m0.591s 00:12:45.960 04:53:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.960 04:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:45.960 ************************************ 00:12:45.960 END TEST bdev_verify 00:12:45.960 ************************************ 00:12:45.960 04:53:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:45.960 04:53:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:45.960 04:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.960 04:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:45.960 ************************************ 00:12:45.960 START TEST bdev_verify_big_io 00:12:45.960 ************************************ 00:12:45.960 04:53:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:45.960 [2024-11-18 04:53:09.408835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
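For reference, the 5-second verify pass summarized above is driven by the bdevperf invocation captured in the trace. A minimal manual reproduction would be the sketch below; the flag glosses are inferred from the job headers and warnings in this log, and the JSON config is assumed to describe the same Malloc/raid/passthru bdevs:

    # -q: queue depth (128), -o: IO size in bytes (4096), -w: workload (verify),
    # -t: run time in seconds, -m 0x3: core mask for reactors 0 and 1; -C as captured
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3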
00:12:45.960 [2024-11-18 04:53:09.408993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67439 ] 00:12:46.219 [2024-11-18 04:53:09.569985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:46.478 [2024-11-18 04:53:09.745727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.478 [2024-11-18 04:53:09.745743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.737 [2024-11-18 04:53:10.082063] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.737 [2024-11-18 04:53:10.082141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.737 [2024-11-18 04:53:10.090032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.737 [2024-11-18 04:53:10.090097] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.737 [2024-11-18 04:53:10.098056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.737 [2024-11-18 04:53:10.098101] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:46.737 [2024-11-18 04:53:10.098133] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:46.995 [2024-11-18 04:53:10.270811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.995 [2024-11-18 04:53:10.270935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.995 [2024-11-18 04:53:10.270961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:46.995 [2024-11-18 04:53:10.270974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.995 [2024-11-18 04:53:10.273602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.995 [2024-11-18 04:53:10.273644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:47.254 [2024-11-18 04:53:10.585503] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.588618] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.592238] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.595760] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.598931] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.602372] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.605514] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.609027] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.612196] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.615757] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.618958] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.622181] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.625401] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.628900] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:47.254 [2024-11-18 04:53:10.632491] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:47.255 [2024-11-18 04:53:10.635614] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:47.255 [2024-11-18 04:53:10.706747] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:47.255 [2024-11-18 04:53:10.713107] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:47.255 Running I/O for 5 seconds... 00:12:53.895 00:12:53.895 Latency(us) 00:12:53.895 [2024-11-18T04:53:17.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x100 00:12:53.895 Malloc0 : 5.73 305.00 19.06 0.00 0.00 407825.17 30504.03 1105771.05 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x100 length 0x100 00:12:53.895 Malloc0 : 5.58 290.56 18.16 0.00 0.00 422019.11 27405.96 1258291.20 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x80 00:12:53.895 Malloc1p0 : 5.85 174.08 10.88 0.00 0.00 698263.43 49330.73 1334551.27 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x80 length 0x80 00:12:53.895 Malloc1p0 : 5.58 225.90 14.12 0.00 0.00 538374.08 46470.98 1128649.08 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x80 00:12:53.895 Malloc1p1 : 6.00 105.43 6.59 0.00 0.00 1122037.31 51475.55 2364062.25 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x80 length 0x80 00:12:53.895 Malloc1p1 : 5.82 114.72 7.17 0.00 0.00 1052524.21 48854.11 2348810.24 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x20 00:12:53.895 Malloc2p0 : 5.73 57.22 3.58 0.00 0.00 515420.93 8698.41 880803.84 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x20 length 0x20 00:12:53.895 Malloc2p0 : 5.71 61.16 3.82 0.00 0.00 487407.74 8281.37 754974.72 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x20 00:12:53.895 Malloc2p1 : 5.73 57.21 3.58 0.00 0.00 512839.27 8579.26 861738.82 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x20 length 0x20 00:12:53.895 Malloc2p1 : 5.71 61.15 3.82 0.00 0.00 485178.25 7357.91 739722.71 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x20 00:12:53.895 Malloc2p2 : 5.73 
57.20 3.58 0.00 0.00 510235.45 8698.41 846486.81 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x20 length 0x20 00:12:53.895 Malloc2p2 : 5.71 61.14 3.82 0.00 0.00 482843.78 7804.74 724470.69 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x20 00:12:53.895 Malloc2p3 : 5.80 60.20 3.76 0.00 0.00 489647.21 9592.09 827421.79 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x20 length 0x20 00:12:53.895 Malloc2p3 : 5.71 61.12 3.82 0.00 0.00 480513.32 8162.21 705405.67 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x20 00:12:53.895 Malloc2p4 : 5.80 60.19 3.76 0.00 0.00 487040.88 8340.95 804543.77 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x20 length 0x20 00:12:53.895 Malloc2p4 : 5.71 61.11 3.82 0.00 0.00 478255.23 8936.73 686340.65 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.895 Verification LBA range: start 0x0 length 0x20 00:12:53.895 Malloc2p5 : 5.80 60.18 3.76 0.00 0.00 484574.70 8400.52 789291.75 00:12:53.895 [2024-11-18T04:53:17.419Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x20 length 0x20 00:12:53.896 Malloc2p5 : 5.71 61.10 3.82 0.00 0.00 475778.73 7685.59 671088.64 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x20 00:12:53.896 Malloc2p6 : 5.80 60.17 3.76 0.00 0.00 482000.46 7864.32 770226.73 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x20 length 0x20 00:12:53.896 Malloc2p6 : 5.75 64.04 4.00 0.00 0.00 456419.22 8638.84 652023.62 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x20 00:12:53.896 Malloc2p7 : 5.80 60.15 3.76 0.00 0.00 479451.24 9472.93 751161.72 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x20 length 0x20 00:12:53.896 Malloc2p7 : 5.75 64.03 4.00 0.00 0.00 454156.06 8221.79 636771.61 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x100 00:12:53.896 TestPT : 6.09 109.73 6.86 0.00 0.00 1017183.83 51713.86 2318306.21 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x100 length 0x100 00:12:53.896 TestPT : 5.93 108.19 6.76 0.00 0.00 1048635.02 65774.31 2165786.07 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x200 00:12:53.896 raid0 : 
6.06 114.72 7.17 0.00 0.00 961893.60 52190.49 2318306.21 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x200 length 0x200 00:12:53.896 raid0 : 5.96 116.53 7.28 0.00 0.00 957695.53 47424.23 2333558.23 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x200 00:12:53.896 concat0 : 6.03 120.05 7.50 0.00 0.00 902450.47 33363.78 2318306.21 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x200 length 0x200 00:12:53.896 concat0 : 5.97 121.37 7.59 0.00 0.00 904017.77 22520.55 2318306.21 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x100 00:12:53.896 raid1 : 6.03 154.15 9.63 0.00 0.00 696693.29 19065.02 2303054.20 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x100 length 0x100 00:12:53.896 raid1 : 5.94 139.63 8.73 0.00 0.00 778734.84 15252.01 2318306.21 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x0 length 0x4e 00:12:53.896 AIO0 : 6.06 154.75 9.67 0.00 0.00 415383.64 1087.30 1319299.26 00:12:53.896 [2024-11-18T04:53:17.420Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:53.896 Verification LBA range: start 0x4e length 0x4e 00:12:53.896 AIO0 : 5.97 145.93 9.12 0.00 0.00 449118.01 878.78 1319299.26 00:12:53.896 [2024-11-18T04:53:17.420Z] =================================================================================================================== 00:12:53.896 [2024-11-18T04:53:17.420Z] Total : 3468.13 216.76 0.00 0.00 640992.83 878.78 2364062.25 00:12:55.802 00:12:55.802 real 0m9.813s 00:12:55.802 user 0m18.108s 00:12:55.802 sys 0m0.528s 00:12:55.802 04:53:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:55.802 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:55.802 ************************************ 00:12:55.802 END TEST bdev_verify_big_io 00:12:55.802 ************************************ 00:12:55.802 04:53:19 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.802 04:53:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:55.802 04:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.802 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:55.802 ************************************ 00:12:55.802 START TEST bdev_write_zeroes 00:12:55.802 ************************************ 00:12:55.802 04:53:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.802 [2024-11-18 04:53:19.272417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
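The queue-depth warnings in the big-IO verify section above follow from bdev capacity: a verify job cannot keep more IOs in flight than fit in the target bdev at once. A back-of-the-envelope check in shell arithmetic; the 2 MiB per-partition size is an assumption chosen to match the warning, not a figure taken from the log:

    io_size=65536                                 # -o 65536, per the warnings
    partition_bytes=$((2 * 1024 * 1024))          # assumed size of each Malloc2pX partition
    max_inflight=$((partition_bytes / io_size))   # = 32, matching "limited to 32"
    queue_depth=$(( 128 < max_inflight ? 128 : max_inflight ))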
00:12:55.802 [2024-11-18 04:53:19.272623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67565 ] 00:12:56.061 [2024-11-18 04:53:19.429691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.320 [2024-11-18 04:53:19.604759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.580 [2024-11-18 04:53:19.932978] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:56.580 [2024-11-18 04:53:19.933081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:56.580 [2024-11-18 04:53:19.940944] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:56.580 [2024-11-18 04:53:19.941022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:56.580 [2024-11-18 04:53:19.948965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:56.580 [2024-11-18 04:53:19.949021] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:56.580 [2024-11-18 04:53:19.949058] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:56.840 [2024-11-18 04:53:20.122397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:56.840 [2024-11-18 04:53:20.122493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.840 [2024-11-18 04:53:20.122528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:56.840 [2024-11-18 04:53:20.122542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.840 [2024-11-18 04:53:20.125056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.840 [2024-11-18 04:53:20.125131] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:57.099 Running I/O for 1 seconds... 
00:12:58.038 00:12:58.038 Latency(us) 00:12:58.038 [2024-11-18T04:53:21.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc0 : 1.05 5380.96 21.02 0.00 0.00 23773.38 577.16 38844.97 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc1p0 : 1.05 5373.68 20.99 0.00 0.00 23765.81 811.75 37891.72 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc1p1 : 1.05 5367.39 20.97 0.00 0.00 23748.94 748.45 37176.79 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p0 : 1.05 5361.44 20.94 0.00 0.00 23736.08 767.07 36461.85 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p1 : 1.05 5355.12 20.92 0.00 0.00 23713.99 770.79 35746.91 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p2 : 1.05 5348.73 20.89 0.00 0.00 23699.20 759.62 35031.97 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p3 : 1.05 5342.57 20.87 0.00 0.00 23685.22 781.96 34317.03 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p4 : 1.06 5336.26 20.84 0.00 0.00 23669.00 733.56 33602.09 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p5 : 1.06 5329.73 20.82 0.00 0.00 23655.62 785.69 32887.16 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p6 : 1.06 5323.58 20.80 0.00 0.00 23636.11 763.35 32172.22 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 Malloc2p7 : 1.06 5317.48 20.77 0.00 0.00 23624.29 808.03 31457.28 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 TestPT : 1.06 5310.95 20.75 0.00 0.00 23613.32 763.35 30742.34 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 raid0 : 1.06 5304.00 20.72 0.00 0.00 23584.62 1496.90 29193.31 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 concat0 : 1.06 5296.85 20.69 0.00 0.00 23538.56 1534.14 27644.28 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 raid1 : 1.07 5287.52 20.65 0.00 0.00 23484.24 2383.13 26214.40 00:12:58.038 [2024-11-18T04:53:21.562Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:58.038 AIO0 : 1.07 5275.60 20.61 0.00 0.00 23423.89 1645.85 26095.24 00:12:58.038 [2024-11-18T04:53:21.562Z] =================================================================================================================== 00:12:58.038 [2024-11-18T04:53:21.562Z] Total : 85311.85 333.25 0.00 
0.00 23647.04 577.16 38844.97 00:12:59.945 00:12:59.945 real 0m4.232s 00:12:59.945 user 0m3.737s 00:12:59.945 sys 0m0.346s 00:12:59.945 04:53:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.945 ************************************ 00:12:59.945 04:53:23 -- common/autotest_common.sh@10 -- # set +x 00:12:59.945 END TEST bdev_write_zeroes 00:12:59.945 ************************************ 00:13:00.204 04:53:23 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.204 04:53:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:00.204 04:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.204 04:53:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.204 ************************************ 00:13:00.204 START TEST bdev_json_nonenclosed 00:13:00.204 ************************************ 00:13:00.204 04:53:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.204 [2024-11-18 04:53:23.571645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.204 [2024-11-18 04:53:23.571819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67628 ] 00:13:00.463 [2024-11-18 04:53:23.742238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.463 [2024-11-18 04:53:23.919240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.463 [2024-11-18 04:53:23.919502] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:00.463 [2024-11-18 04:53:23.919528] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.032 00:13:01.032 real 0m0.804s 00:13:01.032 user 0m0.584s 00:13:01.032 sys 0m0.119s 00:13:01.032 04:53:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.032 04:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.032 ************************************ 00:13:01.032 END TEST bdev_json_nonenclosed 00:13:01.032 ************************************ 00:13:01.032 04:53:24 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.032 04:53:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:01.032 04:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.032 04:53:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.032 ************************************ 00:13:01.032 START TEST bdev_json_nonarray 00:13:01.032 ************************************ 00:13:01.032 04:53:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.032 [2024-11-18 04:53:24.428151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
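The bdev_json_nonenclosed test that just finished hands bdevperf a config whose top level is not wrapped in braces and expects the "not enclosed in {}" error seen above. The actual nonenclosed.json contents are not shown in this log; a hypothetical minimal input that would trip the same check:

    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF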
00:13:01.032 [2024-11-18 04:53:24.428345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67655 ] 00:13:01.291 [2024-11-18 04:53:24.597858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.291 [2024-11-18 04:53:24.785747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.291 [2024-11-18 04:53:24.785976] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:01.292 [2024-11-18 04:53:24.786003] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.861 00:13:01.861 real 0m0.817s 00:13:01.861 user 0m0.586s 00:13:01.861 sys 0m0.130s 00:13:01.861 04:53:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.861 04:53:25 -- common/autotest_common.sh@10 -- # set +x 00:13:01.861 ************************************ 00:13:01.861 END TEST bdev_json_nonarray 00:13:01.861 ************************************ 00:13:01.861 04:53:25 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:01.861 04:53:25 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:01.861 04:53:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:01.861 04:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.861 04:53:25 -- common/autotest_common.sh@10 -- # set +x 00:13:01.861 ************************************ 00:13:01.861 START TEST bdev_qos 00:13:01.861 ************************************ 00:13:01.861 04:53:25 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:13:01.861 04:53:25 -- bdev/blockdev.sh@444 -- # QOS_PID=67686 00:13:01.861 Process qos testing pid: 67686 00:13:01.861 04:53:25 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 67686' 00:13:01.861 04:53:25 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:01.861 04:53:25 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:01.861 04:53:25 -- bdev/blockdev.sh@447 -- # waitforlisten 67686 00:13:01.861 04:53:25 -- common/autotest_common.sh@829 -- # '[' -z 67686 ']' 00:13:01.861 04:53:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.861 04:53:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.861 04:53:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.861 04:53:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.861 04:53:25 -- common/autotest_common.sh@10 -- # set +x 00:13:01.861 [2024-11-18 04:53:25.295938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
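Likewise, bdev_json_nonarray feeds a config in which "subsystems" is present but is not an array, matching the "'subsystems' should be an array" error above. Again the real nonarray.json is not captured here; a hypothetical trigger:

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF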
00:13:01.862 [2024-11-18 04:53:25.296608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67686 ] 00:13:02.121 [2024-11-18 04:53:25.471293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.380 [2024-11-18 04:53:25.694875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.948 04:53:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.948 04:53:26 -- common/autotest_common.sh@862 -- # return 0 00:13:02.948 04:53:26 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:02.948 04:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.948 04:53:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.948 Malloc_0 00:13:02.948 04:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.948 04:53:26 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:02.948 04:53:26 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:02.948 04:53:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.948 04:53:26 -- common/autotest_common.sh@899 -- # local i 00:13:02.948 04:53:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.948 04:53:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.948 04:53:26 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:02.948 04:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.948 04:53:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.948 04:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.948 04:53:26 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:02.948 04:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.948 04:53:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.948 [ 00:13:02.948 { 00:13:02.948 "name": "Malloc_0", 00:13:02.948 "aliases": [ 00:13:02.948 "256b088b-3bca-49e5-ac76-654c2399d28b" 00:13:02.948 ], 00:13:02.948 "product_name": "Malloc disk", 00:13:02.948 "block_size": 512, 00:13:02.948 "num_blocks": 262144, 00:13:02.948 "uuid": "256b088b-3bca-49e5-ac76-654c2399d28b", 00:13:02.948 "assigned_rate_limits": { 00:13:02.948 "rw_ios_per_sec": 0, 00:13:02.948 "rw_mbytes_per_sec": 0, 00:13:02.948 "r_mbytes_per_sec": 0, 00:13:02.948 "w_mbytes_per_sec": 0 00:13:02.948 }, 00:13:02.948 "claimed": false, 00:13:02.948 "zoned": false, 00:13:02.948 "supported_io_types": { 00:13:02.948 "read": true, 00:13:02.948 "write": true, 00:13:02.948 "unmap": true, 00:13:02.948 "write_zeroes": true, 00:13:02.948 "flush": true, 00:13:02.948 "reset": true, 00:13:02.948 "compare": false, 00:13:02.948 "compare_and_write": false, 00:13:02.948 "abort": true, 00:13:02.948 "nvme_admin": false, 00:13:02.948 "nvme_io": false 00:13:02.948 }, 00:13:02.948 "memory_domains": [ 00:13:02.948 { 00:13:02.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.948 "dma_device_type": 2 00:13:02.948 } 00:13:02.948 ], 00:13:02.948 "driver_specific": {} 00:13:02.948 } 00:13:02.948 ] 00:13:02.948 04:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.948 04:53:26 -- common/autotest_common.sh@905 -- # return 0 00:13:02.948 04:53:26 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:02.948 04:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.948 04:53:26 -- common/autotest_common.sh@10 -- # 
set +x 00:13:02.948 Null_1 00:13:02.948 04:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.948 04:53:26 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:02.948 04:53:26 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:02.948 04:53:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.948 04:53:26 -- common/autotest_common.sh@899 -- # local i 00:13:02.948 04:53:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.948 04:53:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.948 04:53:26 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:02.948 04:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.948 04:53:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.948 04:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.948 04:53:26 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:02.948 04:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.948 04:53:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.948 [ 00:13:02.948 { 00:13:02.948 "name": "Null_1", 00:13:02.948 "aliases": [ 00:13:02.948 "1bea562a-663d-4568-a4c3-c0d07ea8952c" 00:13:02.948 ], 00:13:02.948 "product_name": "Null disk", 00:13:02.948 "block_size": 512, 00:13:02.948 "num_blocks": 262144, 00:13:02.949 "uuid": "1bea562a-663d-4568-a4c3-c0d07ea8952c", 00:13:02.949 "assigned_rate_limits": { 00:13:02.949 "rw_ios_per_sec": 0, 00:13:02.949 "rw_mbytes_per_sec": 0, 00:13:02.949 "r_mbytes_per_sec": 0, 00:13:02.949 "w_mbytes_per_sec": 0 00:13:02.949 }, 00:13:02.949 "claimed": false, 00:13:02.949 "zoned": false, 00:13:02.949 "supported_io_types": { 00:13:02.949 "read": true, 00:13:02.949 "write": true, 00:13:02.949 "unmap": false, 00:13:02.949 "write_zeroes": true, 00:13:02.949 "flush": false, 00:13:02.949 "reset": true, 00:13:02.949 "compare": false, 00:13:02.949 "compare_and_write": false, 00:13:02.949 "abort": true, 00:13:02.949 "nvme_admin": false, 00:13:02.949 "nvme_io": false 00:13:02.949 }, 00:13:02.949 "driver_specific": {} 00:13:02.949 } 00:13:02.949 ] 00:13:02.949 04:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.949 04:53:26 -- common/autotest_common.sh@905 -- # return 0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:02.949 04:53:26 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:02.949 04:53:26 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:02.949 04:53:26 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:02.949 04:53:26 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:02.949 04:53:26 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:02.949 04:53:26 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:02.949 04:53:26 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:02.949 04:53:26 -- bdev/blockdev.sh@376 -- # tail -1 00:13:03.208 Running I/O for 60 seconds... 
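The 60-second QoS run launched above is sampled with the iostat helper seen in the trace; condensed, the measurement step amounts to the sketch below, with the column meanings (field 2 = IOPS, field 6 = KiB/s) inferred from the captured result lines:

    # keep the device's last sample, then pull the IOPS column
    iostat_result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    iops=$(echo "$iostat_result" | awk '{print $2}')   # e.g. 65890.62, truncated to 65890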
00:13:08.481 04:53:31 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 65890.62 263562.49 0.00 0.00 265216.00 0.00 0.00 ' 00:13:08.481 04:53:31 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:08.481 04:53:31 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:08.481 04:53:31 -- bdev/blockdev.sh@378 -- # iostat_result=65890.62 00:13:08.481 04:53:31 -- bdev/blockdev.sh@383 -- # echo 65890 00:13:08.481 04:53:31 -- bdev/blockdev.sh@414 -- # io_result=65890 00:13:08.481 04:53:31 -- bdev/blockdev.sh@416 -- # iops_limit=16000 00:13:08.481 04:53:31 -- bdev/blockdev.sh@417 -- # '[' 16000 -gt 1000 ']' 00:13:08.481 04:53:31 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0 00:13:08.481 04:53:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.481 04:53:31 -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 04:53:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.481 04:53:31 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0 00:13:08.481 04:53:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:08.481 04:53:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.481 04:53:31 -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 ************************************ 00:13:08.481 START TEST bdev_qos_iops 00:13:08.481 ************************************ 00:13:08.481 04:53:31 -- common/autotest_common.sh@1114 -- # run_qos_test 16000 IOPS Malloc_0 00:13:08.481 04:53:31 -- bdev/blockdev.sh@387 -- # local qos_limit=16000 00:13:08.481 04:53:31 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:08.481 04:53:31 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:08.481 04:53:31 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:08.481 04:53:31 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:08.481 04:53:31 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:08.481 04:53:31 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:08.481 04:53:31 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:08.481 04:53:31 -- bdev/blockdev.sh@376 -- # tail -1 00:13:13.755 04:53:36 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 16021.01 64084.05 0.00 0.00 64960.00 0.00 0.00 ' 00:13:13.755 04:53:36 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:13.755 04:53:36 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:13.755 04:53:36 -- bdev/blockdev.sh@378 -- # iostat_result=16021.01 00:13:13.755 04:53:36 -- bdev/blockdev.sh@383 -- # echo 16021 00:13:13.755 04:53:36 -- bdev/blockdev.sh@390 -- # qos_result=16021 00:13:13.755 04:53:36 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:13.755 04:53:36 -- bdev/blockdev.sh@394 -- # lower_limit=14400 00:13:13.755 04:53:36 -- bdev/blockdev.sh@395 -- # upper_limit=17600 00:13:13.755 04:53:36 -- bdev/blockdev.sh@398 -- # '[' 16021 -lt 14400 ']' 00:13:13.755 04:53:36 -- bdev/blockdev.sh@398 -- # '[' 16021 -gt 17600 ']' 00:13:13.755 00:13:13.755 real 0m5.238s 00:13:13.755 user 0m0.125s 00:13:13.755 sys 0m0.039s 00:13:13.755 04:53:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.755 04:53:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.755 ************************************ 00:13:13.755 END TEST bdev_qos_iops 00:13:13.755 ************************************ 00:13:13.755 04:53:36 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:13.755 04:53:36 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:13.755 04:53:36 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:13.755 04:53:36 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:13.755 04:53:36 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:13.755 04:53:36 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:13.755 04:53:36 -- bdev/blockdev.sh@376 -- # tail -1 00:13:19.031 04:53:42 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 24320.94 97283.77 0.00 0.00 99328.00 0.00 0.00 ' 00:13:19.031 04:53:42 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:19.031 04:53:42 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:19.031 04:53:42 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:19.031 04:53:42 -- bdev/blockdev.sh@380 -- # iostat_result=99328.00 00:13:19.031 04:53:42 -- bdev/blockdev.sh@383 -- # echo 99328 00:13:19.031 04:53:42 -- bdev/blockdev.sh@425 -- # bw_limit=99328 00:13:19.031 04:53:42 -- bdev/blockdev.sh@426 -- # bw_limit=9 00:13:19.031 04:53:42 -- bdev/blockdev.sh@427 -- # '[' 9 -lt 2 ']' 00:13:19.031 04:53:42 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:13:19.031 04:53:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.031 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:13:19.031 04:53:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.031 04:53:42 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:13:19.031 04:53:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:19.031 04:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.031 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:13:19.031 ************************************ 00:13:19.031 START TEST bdev_qos_bw 00:13:19.031 ************************************ 00:13:19.031 04:53:42 -- common/autotest_common.sh@1114 -- # run_qos_test 9 BANDWIDTH Null_1 00:13:19.031 04:53:42 -- bdev/blockdev.sh@387 -- # local qos_limit=9 00:13:19.031 04:53:42 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:19.031 04:53:42 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:19.031 04:53:42 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:19.031 04:53:42 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:19.031 04:53:42 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:19.031 04:53:42 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:19.031 04:53:42 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:19.031 04:53:42 -- bdev/blockdev.sh@376 -- # tail -1 00:13:24.301 04:53:47 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2304.08 9216.32 0.00 0.00 9444.00 0.00 0.00 ' 00:13:24.301 04:53:47 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:24.301 04:53:47 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:24.301 04:53:47 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:24.301 04:53:47 -- bdev/blockdev.sh@380 -- # iostat_result=9444.00 00:13:24.301 04:53:47 -- bdev/blockdev.sh@383 -- # echo 9444 00:13:24.301 04:53:47 -- bdev/blockdev.sh@390 -- # qos_result=9444 00:13:24.301 04:53:47 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:24.301 04:53:47 -- bdev/blockdev.sh@392 -- # qos_limit=9216 00:13:24.301 04:53:47 -- bdev/blockdev.sh@394 -- # lower_limit=8294 00:13:24.301 04:53:47 -- bdev/blockdev.sh@395 -- # upper_limit=10137 00:13:24.301 04:53:47 -- bdev/blockdev.sh@398 -- # '[' 9444 -lt 8294 ']' 00:13:24.301 04:53:47 -- bdev/blockdev.sh@398 -- # '[' 9444 -gt 10137 ']' 
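The bounds checked in the bandwidth subtest above are consistent with a +/-10% acceptance window around the configured limit, computed with integer division; the same rule reproduces every captured pair (IOPS subtest earlier: 16000 -> 14400..17600; read-only bandwidth subtest that follows: 2048 -> 1843..2252):

    qos_limit=9216                            # 9 MiB/s expressed in KiB/s, per the trace
    lower_limit=$((qos_limit * 90 / 100))     # 8294, matching the log
    upper_limit=$((qos_limit * 110 / 100))    # 10137, matching the log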
00:13:24.301 00:13:24.301 real 0m5.265s 00:13:24.301 user 0m0.122s 00:13:24.301 sys 0m0.045s 00:13:24.301 04:53:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:24.302 ************************************ 00:13:24.302 END TEST bdev_qos_bw 00:13:24.302 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:13:24.302 ************************************ 00:13:24.302 04:53:47 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:24.302 04:53:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.302 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:13:24.302 04:53:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.302 04:53:47 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:24.302 04:53:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:24.302 04:53:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.302 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:13:24.302 ************************************ 00:13:24.302 START TEST bdev_qos_ro_bw 00:13:24.302 ************************************ 00:13:24.302 04:53:47 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:24.302 04:53:47 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:24.302 04:53:47 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:24.302 04:53:47 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:24.302 04:53:47 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:24.302 04:53:47 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:24.302 04:53:47 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:24.302 04:53:47 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:24.302 04:53:47 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:24.302 04:53:47 -- bdev/blockdev.sh@376 -- # tail -1 00:13:29.571 04:53:52 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.11 2044.43 0.00 0.00 2060.00 0.00 0.00 ' 00:13:29.571 04:53:52 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:29.571 04:53:52 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:29.571 04:53:52 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:29.571 04:53:52 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:13:29.571 04:53:52 -- bdev/blockdev.sh@383 -- # echo 2060 00:13:29.571 04:53:52 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:13:29.571 04:53:52 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:29.571 04:53:52 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:29.571 04:53:52 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:29.571 04:53:52 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:29.571 04:53:52 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:13:29.571 04:53:52 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:13:29.571 00:13:29.571 real 0m5.183s 00:13:29.571 user 0m0.121s 00:13:29.571 sys 0m0.036s 00:13:29.571 04:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.571 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.571 ************************************ 00:13:29.571 END TEST bdev_qos_ro_bw 00:13:29.571 ************************************ 00:13:29.571 04:53:52 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:29.571 04:53:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.571 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:13:30.139 04:53:53 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.139 04:53:53 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:30.139 04:53:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.139 04:53:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.139 00:13:30.139 Latency(us) 00:13:30.139 [2024-11-18T04:53:53.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.139 [2024-11-18T04:53:53.663Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:30.139 Malloc_0 : 26.79 22176.36 86.63 0.00 0.00 11437.30 2189.50 503316.48 00:13:30.139 [2024-11-18T04:53:53.663Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:30.139 Null_1 : 27.01 22874.40 89.35 0.00 0.00 11163.54 662.81 211621.70 00:13:30.139 [2024-11-18T04:53:53.663Z] =================================================================================================================== 00:13:30.139 [2024-11-18T04:53:53.663Z] Total : 45050.76 175.98 0.00 0.00 11297.75 662.81 503316.48 00:13:30.139 0 00:13:30.139 04:53:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.139 04:53:53 -- bdev/blockdev.sh@459 -- # killprocess 67686 00:13:30.139 04:53:53 -- common/autotest_common.sh@936 -- # '[' -z 67686 ']' 00:13:30.139 04:53:53 -- common/autotest_common.sh@940 -- # kill -0 67686 00:13:30.139 04:53:53 -- common/autotest_common.sh@941 -- # uname 00:13:30.139 04:53:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.139 04:53:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67686 00:13:30.139 04:53:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:30.139 killing process with pid 67686 00:13:30.139 Received shutdown signal, test time was about 27.049953 seconds 00:13:30.139 00:13:30.139 Latency(us) 00:13:30.139 [2024-11-18T04:53:53.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.139 [2024-11-18T04:53:53.663Z] =================================================================================================================== 00:13:30.139 [2024-11-18T04:53:53.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.139 04:53:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:30.139 04:53:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67686' 00:13:30.139 04:53:53 -- common/autotest_common.sh@955 -- # kill 67686 00:13:30.139 04:53:53 -- common/autotest_common.sh@960 -- # wait 67686 00:13:31.567 04:53:54 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:31.567 00:13:31.567 real 0m29.636s 00:13:31.567 user 0m30.445s 00:13:31.567 sys 0m0.735s 00:13:31.567 04:53:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:31.567 04:53:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 ************************************ 00:13:31.567 END TEST bdev_qos 00:13:31.567 ************************************ 00:13:31.567 04:53:54 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:31.567 04:53:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:31.567 04:53:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.567 04:53:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 ************************************ 00:13:31.567 START TEST bdev_qd_sampling 00:13:31.567 ************************************ 00:13:31.567 04:53:54 -- common/autotest_common.sh@1114 -- # qd_sampling_test_suite '' 00:13:31.567 04:53:54 -- 
bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:31.567 04:53:54 -- bdev/blockdev.sh@539 -- # QD_PID=68104 00:13:31.567 04:53:54 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:31.567 Process bdev QD sampling period testing pid: 68104 00:13:31.567 04:53:54 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 68104' 00:13:31.567 04:53:54 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:31.567 04:53:54 -- bdev/blockdev.sh@542 -- # waitforlisten 68104 00:13:31.567 04:53:54 -- common/autotest_common.sh@829 -- # '[' -z 68104 ']' 00:13:31.567 04:53:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.568 04:53:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.568 04:53:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.568 04:53:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.568 04:53:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.568 [2024-11-18 04:53:54.988860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:31.568 [2024-11-18 04:53:54.989045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68104 ] 00:13:31.827 [2024-11-18 04:53:55.157699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:32.086 [2024-11-18 04:53:55.386231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.086 [2024-11-18 04:53:55.386240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.654 04:53:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.654 04:53:55 -- common/autotest_common.sh@862 -- # return 0 00:13:32.654 04:53:55 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:32.654 04:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.654 04:53:55 -- common/autotest_common.sh@10 -- # set +x 00:13:32.654 Malloc_QD 00:13:32.654 04:53:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.654 04:53:56 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:32.654 04:53:56 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:13:32.654 04:53:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:32.654 04:53:56 -- common/autotest_common.sh@899 -- # local i 00:13:32.654 04:53:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:32.654 04:53:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:32.654 04:53:56 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:32.654 04:53:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.654 04:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:32.654 04:53:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.654 04:53:56 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:32.654 04:53:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.654 04:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:32.654 [ 00:13:32.654 { 00:13:32.654 "name": "Malloc_QD", 00:13:32.654 
"aliases": [ 00:13:32.654 "cac46203-34eb-4b73-9101-848006062cc4" 00:13:32.654 ], 00:13:32.654 "product_name": "Malloc disk", 00:13:32.654 "block_size": 512, 00:13:32.654 "num_blocks": 262144, 00:13:32.654 "uuid": "cac46203-34eb-4b73-9101-848006062cc4", 00:13:32.654 "assigned_rate_limits": { 00:13:32.654 "rw_ios_per_sec": 0, 00:13:32.654 "rw_mbytes_per_sec": 0, 00:13:32.654 "r_mbytes_per_sec": 0, 00:13:32.654 "w_mbytes_per_sec": 0 00:13:32.654 }, 00:13:32.654 "claimed": false, 00:13:32.654 "zoned": false, 00:13:32.654 "supported_io_types": { 00:13:32.654 "read": true, 00:13:32.654 "write": true, 00:13:32.654 "unmap": true, 00:13:32.654 "write_zeroes": true, 00:13:32.654 "flush": true, 00:13:32.654 "reset": true, 00:13:32.654 "compare": false, 00:13:32.654 "compare_and_write": false, 00:13:32.654 "abort": true, 00:13:32.654 "nvme_admin": false, 00:13:32.654 "nvme_io": false 00:13:32.654 }, 00:13:32.654 "memory_domains": [ 00:13:32.654 { 00:13:32.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.654 "dma_device_type": 2 00:13:32.654 } 00:13:32.654 ], 00:13:32.654 "driver_specific": {} 00:13:32.654 } 00:13:32.654 ] 00:13:32.654 04:53:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.654 04:53:56 -- common/autotest_common.sh@905 -- # return 0 00:13:32.654 04:53:56 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:32.654 04:53:56 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:32.654 Running I/O for 5 seconds... 00:13:34.559 04:53:58 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:34.559 04:53:58 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:34.559 04:53:58 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:34.559 04:53:58 -- bdev/blockdev.sh@519 -- # local iostats 00:13:34.559 04:53:58 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:34.559 04:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.559 04:53:58 -- common/autotest_common.sh@10 -- # set +x 00:13:34.559 04:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.559 04:53:58 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:34.559 04:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.559 04:53:58 -- common/autotest_common.sh@10 -- # set +x 00:13:34.559 04:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.559 04:53:58 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:34.559 "tick_rate": 2200000000, 00:13:34.559 "ticks": 1728710141658, 00:13:34.559 "bdevs": [ 00:13:34.559 { 00:13:34.559 "name": "Malloc_QD", 00:13:34.559 "bytes_read": 831558144, 00:13:34.559 "num_read_ops": 203011, 00:13:34.559 "bytes_written": 0, 00:13:34.559 "num_write_ops": 0, 00:13:34.559 "bytes_unmapped": 0, 00:13:34.559 "num_unmap_ops": 0, 00:13:34.559 "bytes_copied": 0, 00:13:34.559 "num_copy_ops": 0, 00:13:34.559 "read_latency_ticks": 2128394578022, 00:13:34.559 "max_read_latency_ticks": 12274532, 00:13:34.559 "min_read_latency_ticks": 327650, 00:13:34.559 "write_latency_ticks": 0, 00:13:34.559 "max_write_latency_ticks": 0, 00:13:34.559 "min_write_latency_ticks": 0, 00:13:34.559 "unmap_latency_ticks": 0, 00:13:34.559 "max_unmap_latency_ticks": 0, 00:13:34.559 "min_unmap_latency_ticks": 0, 00:13:34.559 "copy_latency_ticks": 0, 00:13:34.559 "max_copy_latency_ticks": 0, 00:13:34.559 "min_copy_latency_ticks": 0, 00:13:34.559 "io_error": {}, 00:13:34.559 "queue_depth_polling_period": 10, 00:13:34.559 "queue_depth": 512, 00:13:34.559 
"io_time": 20, 00:13:34.559 "weighted_io_time": 10240 00:13:34.559 } 00:13:34.559 ] 00:13:34.559 }' 00:13:34.559 04:53:58 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:34.559 04:53:58 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:34.559 04:53:58 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:34.559 04:53:58 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:34.559 04:53:58 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:34.559 04:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.559 04:53:58 -- common/autotest_common.sh@10 -- # set +x 00:13:34.818 00:13:34.818 Latency(us) 00:13:34.818 [2024-11-18T04:53:58.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.818 [2024-11-18T04:53:58.342Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:34.818 Malloc_QD : 1.93 53032.45 207.16 0.00 0.00 4815.39 1288.38 5600.35 00:13:34.818 [2024-11-18T04:53:58.342Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:34.818 Malloc_QD : 1.93 54188.13 211.67 0.00 0.00 4713.08 945.80 5213.09 00:13:34.818 [2024-11-18T04:53:58.342Z] =================================================================================================================== 00:13:34.818 [2024-11-18T04:53:58.342Z] Total : 107220.58 418.83 0.00 0.00 4763.66 945.80 5600.35 00:13:34.818 0 00:13:34.818 04:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.818 04:53:58 -- bdev/blockdev.sh@552 -- # killprocess 68104 00:13:34.818 04:53:58 -- common/autotest_common.sh@936 -- # '[' -z 68104 ']' 00:13:34.818 04:53:58 -- common/autotest_common.sh@940 -- # kill -0 68104 00:13:34.818 04:53:58 -- common/autotest_common.sh@941 -- # uname 00:13:34.818 04:53:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:34.818 04:53:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68104 00:13:34.818 04:53:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:34.818 killing process with pid 68104 00:13:34.818 04:53:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:34.818 04:53:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68104' 00:13:34.818 Received shutdown signal, test time was about 2.071706 seconds 00:13:34.818 00:13:34.818 Latency(us) 00:13:34.818 [2024-11-18T04:53:58.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.818 [2024-11-18T04:53:58.342Z] =================================================================================================================== 00:13:34.818 [2024-11-18T04:53:58.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:34.819 04:53:58 -- common/autotest_common.sh@955 -- # kill 68104 00:13:34.819 04:53:58 -- common/autotest_common.sh@960 -- # wait 68104 00:13:36.197 04:53:59 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:36.197 00:13:36.197 real 0m4.602s 00:13:36.197 user 0m8.495s 00:13:36.197 sys 0m0.371s 00:13:36.197 04:53:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:36.197 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.197 ************************************ 00:13:36.197 END TEST bdev_qd_sampling 00:13:36.197 ************************************ 00:13:36.197 04:53:59 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:36.197 04:53:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:36.197 04:53:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.197 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.197 ************************************ 00:13:36.197 START TEST bdev_error 00:13:36.197 ************************************ 00:13:36.197 04:53:59 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:13:36.197 04:53:59 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:36.197 04:53:59 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:36.197 04:53:59 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:36.197 04:53:59 -- bdev/blockdev.sh@470 -- # ERR_PID=68181 00:13:36.197 Process error testing pid: 68181 00:13:36.197 04:53:59 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 68181' 00:13:36.197 04:53:59 -- bdev/blockdev.sh@472 -- # waitforlisten 68181 00:13:36.197 04:53:59 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:36.197 04:53:59 -- common/autotest_common.sh@829 -- # '[' -z 68181 ']' 00:13:36.197 04:53:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.197 04:53:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.197 04:53:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.197 04:53:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.197 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.197 [2024-11-18 04:53:59.642517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:36.197 [2024-11-18 04:53:59.642698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68181 ] 00:13:36.456 [2024-11-18 04:53:59.811752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.715 [2024-11-18 04:53:59.984293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.284 04:54:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.284 04:54:00 -- common/autotest_common.sh@862 -- # return 0 00:13:37.284 04:54:00 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:37.284 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.284 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 Dev_1 00:13:37.284 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.284 04:54:00 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:37.284 04:54:00 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:37.284 04:54:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:37.284 04:54:00 -- common/autotest_common.sh@899 -- # local i 00:13:37.284 04:54:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:37.284 04:54:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:37.284 04:54:00 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:37.284 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.284 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.284 04:54:00 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:37.284 
04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.284 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 [ 00:13:37.284 { 00:13:37.284 "name": "Dev_1", 00:13:37.284 "aliases": [ 00:13:37.284 "800d1f28-1797-4661-8e4e-27912b34cab5" 00:13:37.284 ], 00:13:37.284 "product_name": "Malloc disk", 00:13:37.284 "block_size": 512, 00:13:37.284 "num_blocks": 262144, 00:13:37.284 "uuid": "800d1f28-1797-4661-8e4e-27912b34cab5", 00:13:37.284 "assigned_rate_limits": { 00:13:37.284 "rw_ios_per_sec": 0, 00:13:37.284 "rw_mbytes_per_sec": 0, 00:13:37.284 "r_mbytes_per_sec": 0, 00:13:37.284 "w_mbytes_per_sec": 0 00:13:37.284 }, 00:13:37.284 "claimed": false, 00:13:37.284 "zoned": false, 00:13:37.284 "supported_io_types": { 00:13:37.284 "read": true, 00:13:37.284 "write": true, 00:13:37.284 "unmap": true, 00:13:37.284 "write_zeroes": true, 00:13:37.284 "flush": true, 00:13:37.284 "reset": true, 00:13:37.284 "compare": false, 00:13:37.284 "compare_and_write": false, 00:13:37.284 "abort": true, 00:13:37.284 "nvme_admin": false, 00:13:37.284 "nvme_io": false 00:13:37.284 }, 00:13:37.284 "memory_domains": [ 00:13:37.284 { 00:13:37.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.284 "dma_device_type": 2 00:13:37.284 } 00:13:37.284 ], 00:13:37.284 "driver_specific": {} 00:13:37.284 } 00:13:37.284 ] 00:13:37.284 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.284 04:54:00 -- common/autotest_common.sh@905 -- # return 0 00:13:37.284 04:54:00 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:37.284 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.284 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 true 00:13:37.284 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.284 04:54:00 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:37.284 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.284 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.543 Dev_2 00:13:37.543 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.543 04:54:00 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:37.543 04:54:00 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:37.543 04:54:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:37.543 04:54:00 -- common/autotest_common.sh@899 -- # local i 00:13:37.543 04:54:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:37.543 04:54:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:37.543 04:54:00 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:37.543 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.543 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.543 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.543 04:54:00 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:37.543 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.543 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.543 [ 00:13:37.543 { 00:13:37.543 "name": "Dev_2", 00:13:37.543 "aliases": [ 00:13:37.543 "4d77505f-7dfd-4626-8f0f-2cd37f8eac9f" 00:13:37.543 ], 00:13:37.543 "product_name": "Malloc disk", 00:13:37.543 "block_size": 512, 00:13:37.543 "num_blocks": 262144, 00:13:37.543 "uuid": "4d77505f-7dfd-4626-8f0f-2cd37f8eac9f", 00:13:37.543 "assigned_rate_limits": { 00:13:37.543 "rw_ios_per_sec": 0, 00:13:37.543 "rw_mbytes_per_sec": 0, 
00:13:37.543 "r_mbytes_per_sec": 0, 00:13:37.543 "w_mbytes_per_sec": 0 00:13:37.543 }, 00:13:37.543 "claimed": false, 00:13:37.543 "zoned": false, 00:13:37.543 "supported_io_types": { 00:13:37.543 "read": true, 00:13:37.543 "write": true, 00:13:37.543 "unmap": true, 00:13:37.543 "write_zeroes": true, 00:13:37.543 "flush": true, 00:13:37.543 "reset": true, 00:13:37.543 "compare": false, 00:13:37.543 "compare_and_write": false, 00:13:37.543 "abort": true, 00:13:37.543 "nvme_admin": false, 00:13:37.543 "nvme_io": false 00:13:37.543 }, 00:13:37.543 "memory_domains": [ 00:13:37.543 { 00:13:37.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.543 "dma_device_type": 2 00:13:37.543 } 00:13:37.543 ], 00:13:37.543 "driver_specific": {} 00:13:37.543 } 00:13:37.543 ] 00:13:37.543 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.543 04:54:00 -- common/autotest_common.sh@905 -- # return 0 00:13:37.543 04:54:00 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:37.543 04:54:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.543 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.544 04:54:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.544 04:54:00 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:37.544 04:54:00 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:37.544 Running I/O for 5 seconds... 00:13:38.481 04:54:01 -- bdev/blockdev.sh@485 -- # kill -0 68181 00:13:38.481 Process is existed as continue on error is set. Pid: 68181 00:13:38.481 04:54:01 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 68181' 00:13:38.481 04:54:01 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:38.481 04:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.481 04:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:38.481 04:54:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.481 04:54:01 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:38.481 04:54:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.481 04:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:38.481 Timeout while waiting for response: 00:13:38.481 00:13:38.481 00:13:38.740 04:54:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 04:54:02 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:42.933 00:13:42.933 Latency(us) 00:13:42.933 [2024-11-18T04:54:06.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.933 [2024-11-18T04:54:06.457Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:42.933 EE_Dev_1 : 0.91 36799.82 143.75 5.50 0.00 431.54 141.50 774.52 00:13:42.933 [2024-11-18T04:54:06.457Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:42.933 Dev_2 : 5.00 74037.66 289.21 0.00 0.00 212.98 84.25 306946.79 00:13:42.933 [2024-11-18T04:54:06.457Z] =================================================================================================================== 00:13:42.933 [2024-11-18T04:54:06.457Z] Total : 110837.48 432.96 5.50 0.00 231.09 84.25 306946.79 00:13:43.869 04:54:07 -- bdev/blockdev.sh@497 -- # killprocess 68181 00:13:43.869 04:54:07 -- common/autotest_common.sh@936 -- # '[' -z 68181 ']' 00:13:43.869 04:54:07 -- common/autotest_common.sh@940 -- # kill -0 68181 00:13:43.869 04:54:07 -- common/autotest_common.sh@941 -- # uname 00:13:43.869 04:54:07 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.869 04:54:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68181 00:13:43.869 04:54:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:43.869 04:54:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:43.869 killing process with pid 68181 00:13:43.869 04:54:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68181' 00:13:43.869 04:54:07 -- common/autotest_common.sh@955 -- # kill 68181 00:13:43.869 Received shutdown signal, test time was about 5.000000 seconds 00:13:43.869 00:13:43.869 Latency(us) 00:13:43.869 [2024-11-18T04:54:07.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.869 [2024-11-18T04:54:07.393Z] =================================================================================================================== 00:13:43.869 [2024-11-18T04:54:07.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.869 04:54:07 -- common/autotest_common.sh@960 -- # wait 68181 00:13:45.244 04:54:08 -- bdev/blockdev.sh@501 -- # ERR_PID=68288 00:13:45.244 Process error testing pid: 68288 00:13:45.244 04:54:08 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:45.244 04:54:08 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 68288' 00:13:45.244 04:54:08 -- bdev/blockdev.sh@503 -- # waitforlisten 68288 00:13:45.244 04:54:08 -- common/autotest_common.sh@829 -- # '[' -z 68288 ']' 00:13:45.244 04:54:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.244 04:54:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.244 04:54:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.244 04:54:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.244 04:54:08 -- common/autotest_common.sh@10 -- # set +x 00:13:45.244 [2024-11-18 04:54:08.660673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:45.244 [2024-11-18 04:54:08.660850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68288 ] 00:13:45.503 [2024-11-18 04:54:08.833666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.503 [2024-11-18 04:54:09.011140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.070 04:54:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.070 04:54:09 -- common/autotest_common.sh@862 -- # return 0 00:13:46.070 04:54:09 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:46.070 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.070 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 Dev_1 00:13:46.328 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.328 04:54:09 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:46.328 04:54:09 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:46.328 04:54:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.328 04:54:09 -- common/autotest_common.sh@899 -- # local i 00:13:46.328 04:54:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.328 04:54:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.328 04:54:09 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:46.328 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.328 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.329 04:54:09 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:46.329 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.329 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.329 [ 00:13:46.329 { 00:13:46.329 "name": "Dev_1", 00:13:46.329 "aliases": [ 00:13:46.329 "26f593f2-0801-454b-90dc-36469ec58e15" 00:13:46.329 ], 00:13:46.329 "product_name": "Malloc disk", 00:13:46.329 "block_size": 512, 00:13:46.329 "num_blocks": 262144, 00:13:46.329 "uuid": "26f593f2-0801-454b-90dc-36469ec58e15", 00:13:46.329 "assigned_rate_limits": { 00:13:46.329 "rw_ios_per_sec": 0, 00:13:46.329 "rw_mbytes_per_sec": 0, 00:13:46.329 "r_mbytes_per_sec": 0, 00:13:46.329 "w_mbytes_per_sec": 0 00:13:46.329 }, 00:13:46.329 "claimed": false, 00:13:46.329 "zoned": false, 00:13:46.329 "supported_io_types": { 00:13:46.329 "read": true, 00:13:46.329 "write": true, 00:13:46.329 "unmap": true, 00:13:46.329 "write_zeroes": true, 00:13:46.329 "flush": true, 00:13:46.329 "reset": true, 00:13:46.329 "compare": false, 00:13:46.329 "compare_and_write": false, 00:13:46.329 "abort": true, 00:13:46.329 "nvme_admin": false, 00:13:46.329 "nvme_io": false 00:13:46.329 }, 00:13:46.329 "memory_domains": [ 00:13:46.329 { 00:13:46.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.329 "dma_device_type": 2 00:13:46.329 } 00:13:46.329 ], 00:13:46.329 "driver_specific": {} 00:13:46.329 } 00:13:46.329 ] 00:13:46.329 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.329 04:54:09 -- common/autotest_common.sh@905 -- # return 0 00:13:46.329 04:54:09 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:46.329 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.329 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.329 true 
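For readability, the error-injection setup traced above, rewritten as a sketch of direct rpc.py calls (rpc_cmd in the suite forwards to this script; the default RPC socket /var/tmp/spdk.sock is an assumption here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Dev_1 128 512                # 128 MiB backing bdev, 512 B blocks
    $rpc bdev_error_create Dev_1                            # stacks error bdev EE_Dev_1 on Dev_1
    $rpc bdev_malloc_create -b Dev_2 128 512                # plain second device for comparison
    $rpc bdev_error_inject_error EE_Dev_1 all failure -n 5  # fail the next 5 I/Os of any type

With this in place, the EE_Dev_1 job reports failed I/Os while Dev_2 completes cleanly; in this second run the suite additionally expects the bdevperf run as a whole to error out, which the NOT wrapper below asserts.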
00:13:46.329 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.329 04:54:09 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:46.329 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.329 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.587 Dev_2 00:13:46.587 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.587 04:54:09 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:46.587 04:54:09 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:46.587 04:54:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.587 04:54:09 -- common/autotest_common.sh@899 -- # local i 00:13:46.587 04:54:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.587 04:54:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.587 04:54:09 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:46.587 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.587 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.587 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.587 04:54:09 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:46.587 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.587 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.587 [ 00:13:46.587 { 00:13:46.587 "name": "Dev_2", 00:13:46.587 "aliases": [ 00:13:46.587 "0b02a28e-73f0-4ee4-a708-72743812dd14" 00:13:46.587 ], 00:13:46.587 "product_name": "Malloc disk", 00:13:46.587 "block_size": 512, 00:13:46.587 "num_blocks": 262144, 00:13:46.587 "uuid": "0b02a28e-73f0-4ee4-a708-72743812dd14", 00:13:46.587 "assigned_rate_limits": { 00:13:46.587 "rw_ios_per_sec": 0, 00:13:46.587 "rw_mbytes_per_sec": 0, 00:13:46.587 "r_mbytes_per_sec": 0, 00:13:46.587 "w_mbytes_per_sec": 0 00:13:46.587 }, 00:13:46.587 "claimed": false, 00:13:46.587 "zoned": false, 00:13:46.587 "supported_io_types": { 00:13:46.587 "read": true, 00:13:46.587 "write": true, 00:13:46.587 "unmap": true, 00:13:46.587 "write_zeroes": true, 00:13:46.587 "flush": true, 00:13:46.587 "reset": true, 00:13:46.588 "compare": false, 00:13:46.588 "compare_and_write": false, 00:13:46.588 "abort": true, 00:13:46.588 "nvme_admin": false, 00:13:46.588 "nvme_io": false 00:13:46.588 }, 00:13:46.588 "memory_domains": [ 00:13:46.588 { 00:13:46.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.588 "dma_device_type": 2 00:13:46.588 } 00:13:46.588 ], 00:13:46.588 "driver_specific": {} 00:13:46.588 } 00:13:46.588 ] 00:13:46.588 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.588 04:54:09 -- common/autotest_common.sh@905 -- # return 0 00:13:46.588 04:54:09 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:46.588 04:54:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.588 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.588 04:54:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.588 04:54:09 -- bdev/blockdev.sh@513 -- # NOT wait 68288 00:13:46.588 04:54:09 -- common/autotest_common.sh@650 -- # local es=0 00:13:46.588 04:54:09 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:46.588 04:54:09 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 68288 00:13:46.588 04:54:09 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:46.588 04:54:09 -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:13:46.588 04:54:09 -- common/autotest_common.sh@642 -- # type -t wait 00:13:46.588 04:54:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.588 04:54:09 -- common/autotest_common.sh@653 -- # wait 68288 00:13:46.588 Running I/O for 5 seconds... 00:13:46.588 task offset: 259536 on job bdev=EE_Dev_1 fails 00:13:46.588 00:13:46.588 Latency(us) 00:13:46.588 [2024-11-18T04:54:10.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.588 [2024-11-18T04:54:10.112Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:46.588 [2024-11-18T04:54:10.112Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:46.588 EE_Dev_1 : 0.00 24802.71 96.89 5636.98 0.00 430.79 163.84 778.24 00:13:46.588 [2024-11-18T04:54:10.112Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:46.588 Dev_2 : 0.00 17139.80 66.95 0.00 0.00 651.78 141.50 1199.01 00:13:46.588 [2024-11-18T04:54:10.112Z] =================================================================================================================== 00:13:46.588 [2024-11-18T04:54:10.112Z] Total : 41942.50 163.84 5636.98 0.00 550.65 141.50 1199.01 00:13:46.588 [2024-11-18 04:54:10.046102] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.588 request: 00:13:46.588 { 00:13:46.588 "method": "perform_tests", 00:13:46.588 "req_id": 1 00:13:46.588 } 00:13:46.588 Got JSON-RPC error response 00:13:46.588 response: 00:13:46.588 { 00:13:46.588 "code": -32603, 00:13:46.588 "message": "bdevperf failed with error Operation not permitted" 00:13:46.588 } 00:13:48.491 04:54:11 -- common/autotest_common.sh@653 -- # es=255 00:13:48.491 04:54:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.491 04:54:11 -- common/autotest_common.sh@662 -- # es=127 00:13:48.491 04:54:11 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:48.491 04:54:11 -- common/autotest_common.sh@670 -- # es=1 00:13:48.491 04:54:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.491 00:13:48.491 real 0m12.155s 00:13:48.491 user 0m12.429s 00:13:48.491 sys 0m0.812s 00:13:48.491 04:54:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:48.491 04:54:11 -- common/autotest_common.sh@10 -- # set +x 00:13:48.491 ************************************ 00:13:48.491 END TEST bdev_error 00:13:48.491 ************************************ 00:13:48.491 04:54:11 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:48.491 04:54:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.491 04:54:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.491 04:54:11 -- common/autotest_common.sh@10 -- # set +x 00:13:48.491 ************************************ 00:13:48.491 START TEST bdev_stat 00:13:48.491 ************************************ 00:13:48.491 04:54:11 -- common/autotest_common.sh@1114 -- # stat_test_suite '' 00:13:48.491 04:54:11 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:48.491 04:54:11 -- bdev/blockdev.sh@594 -- # STAT_PID=68346 00:13:48.491 Process Bdev IO statistics testing pid: 68346 00:13:48.491 04:54:11 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 68346' 00:13:48.491 04:54:11 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:48.491 04:54:11 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 
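The bdevperf invocation that starts the stat suite above, annotated for readability (flag meanings are recalled from the standard bdevperf usage text and should be read as a gloss, not as part of the log):

    # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    #   -z            start idle and wait for the perform_tests RPC
    #   -m 0x3        run reactors on cores 0 and 1
    #   -q 256        queue depth 256 per job
    #   -o 4096       4 KiB I/O size
    #   -w randread   random-read workload
    #   -t 10         run for 10 seconds
    #   -C            let every core submit I/O to each bdev, giving Malloc_STAT
    #                 one channel per reactor (the two thread_ids queried below)
    #   ''            empty positional config argument, as recorded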
00:13:48.491 04:54:11 -- bdev/blockdev.sh@597 -- # waitforlisten 68346 00:13:48.491 04:54:11 -- common/autotest_common.sh@829 -- # '[' -z 68346 ']' 00:13:48.491 04:54:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.491 04:54:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.491 04:54:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.491 04:54:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.491 04:54:11 -- common/autotest_common.sh@10 -- # set +x 00:13:48.491 [2024-11-18 04:54:11.858134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.492 [2024-11-18 04:54:11.858367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68346 ] 00:13:48.750 [2024-11-18 04:54:12.033895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.750 [2024-11-18 04:54:12.257069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.750 [2024-11-18 04:54:12.257080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.319 04:54:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.319 04:54:12 -- common/autotest_common.sh@862 -- # return 0 00:13:49.319 04:54:12 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:49.319 04:54:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.319 04:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 Malloc_STAT 00:13:49.578 04:54:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.578 04:54:12 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:49.578 04:54:12 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:13:49.578 04:54:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:49.578 04:54:12 -- common/autotest_common.sh@899 -- # local i 00:13:49.578 04:54:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:49.578 04:54:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:49.578 04:54:12 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:49.578 04:54:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.578 04:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 04:54:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.578 04:54:12 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:49.578 04:54:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.578 04:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 [ 00:13:49.578 { 00:13:49.578 "name": "Malloc_STAT", 00:13:49.578 "aliases": [ 00:13:49.578 "6ee6823e-a2b8-4315-9eb3-16c184af5a1f" 00:13:49.578 ], 00:13:49.578 "product_name": "Malloc disk", 00:13:49.578 "block_size": 512, 00:13:49.578 "num_blocks": 262144, 00:13:49.578 "uuid": "6ee6823e-a2b8-4315-9eb3-16c184af5a1f", 00:13:49.578 "assigned_rate_limits": { 00:13:49.578 "rw_ios_per_sec": 0, 00:13:49.578 "rw_mbytes_per_sec": 0, 00:13:49.578 "r_mbytes_per_sec": 0, 00:13:49.578 "w_mbytes_per_sec": 0 00:13:49.578 }, 00:13:49.578 "claimed": false, 00:13:49.578 "zoned": false, 00:13:49.578 
"supported_io_types": { 00:13:49.578 "read": true, 00:13:49.578 "write": true, 00:13:49.578 "unmap": true, 00:13:49.578 "write_zeroes": true, 00:13:49.578 "flush": true, 00:13:49.578 "reset": true, 00:13:49.578 "compare": false, 00:13:49.578 "compare_and_write": false, 00:13:49.578 "abort": true, 00:13:49.578 "nvme_admin": false, 00:13:49.578 "nvme_io": false 00:13:49.578 }, 00:13:49.578 "memory_domains": [ 00:13:49.578 { 00:13:49.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.578 "dma_device_type": 2 00:13:49.578 } 00:13:49.578 ], 00:13:49.578 "driver_specific": {} 00:13:49.578 } 00:13:49.578 ] 00:13:49.578 04:54:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.578 04:54:12 -- common/autotest_common.sh@905 -- # return 0 00:13:49.578 04:54:12 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:49.578 04:54:12 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:49.838 Running I/O for 10 seconds... 00:13:51.745 04:54:14 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:51.745 04:54:14 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:51.745 04:54:14 -- bdev/blockdev.sh@558 -- # local iostats 00:13:51.745 04:54:14 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:51.745 04:54:14 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:51.745 04:54:14 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:51.745 04:54:14 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:51.745 04:54:14 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:51.745 04:54:14 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:51.745 04:54:14 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:51.745 04:54:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.745 04:54:14 -- common/autotest_common.sh@10 -- # set +x 00:13:51.745 04:54:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.745 04:54:15 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:51.745 "tick_rate": 2200000000, 00:13:51.745 "ticks": 1765999210386, 00:13:51.745 "bdevs": [ 00:13:51.745 { 00:13:51.745 "name": "Malloc_STAT", 00:13:51.745 "bytes_read": 813732352, 00:13:51.745 "num_read_ops": 198659, 00:13:51.745 "bytes_written": 0, 00:13:51.745 "num_write_ops": 0, 00:13:51.745 "bytes_unmapped": 0, 00:13:51.745 "num_unmap_ops": 0, 00:13:51.745 "bytes_copied": 0, 00:13:51.745 "num_copy_ops": 0, 00:13:51.745 "read_latency_ticks": 2128840775727, 00:13:51.745 "max_read_latency_ticks": 13686306, 00:13:51.745 "min_read_latency_ticks": 312440, 00:13:51.745 "write_latency_ticks": 0, 00:13:51.745 "max_write_latency_ticks": 0, 00:13:51.745 "min_write_latency_ticks": 0, 00:13:51.745 "unmap_latency_ticks": 0, 00:13:51.745 "max_unmap_latency_ticks": 0, 00:13:51.745 "min_unmap_latency_ticks": 0, 00:13:51.745 "copy_latency_ticks": 0, 00:13:51.745 "max_copy_latency_ticks": 0, 00:13:51.745 "min_copy_latency_ticks": 0, 00:13:51.745 "io_error": {} 00:13:51.745 } 00:13:51.745 ] 00:13:51.745 }' 00:13:51.745 04:54:15 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.745 04:54:15 -- bdev/blockdev.sh@567 -- # io_count1=198659 00:13:51.745 04:54:15 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:51.745 04:54:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.745 04:54:15 -- common/autotest_common.sh@10 -- # set +x 00:13:51.745 04:54:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.745 04:54:15 -- bdev/blockdev.sh@569 -- # 
iostats_per_channel='{ 00:13:51.745 "tick_rate": 2200000000, 00:13:51.745 "ticks": 1766066274198, 00:13:51.745 "name": "Malloc_STAT", 00:13:51.745 "channels": [ 00:13:51.745 { 00:13:51.745 "thread_id": 2, 00:13:51.745 "bytes_read": 411041792, 00:13:51.745 "num_read_ops": 100352, 00:13:51.745 "bytes_written": 0, 00:13:51.745 "num_write_ops": 0, 00:13:51.745 "bytes_unmapped": 0, 00:13:51.745 "num_unmap_ops": 0, 00:13:51.745 "bytes_copied": 0, 00:13:51.745 "num_copy_ops": 0, 00:13:51.745 "read_latency_ticks": 1080644913249, 00:13:51.745 "max_read_latency_ticks": 13686306, 00:13:51.745 "min_read_latency_ticks": 8168650, 00:13:51.745 "write_latency_ticks": 0, 00:13:51.745 "max_write_latency_ticks": 0, 00:13:51.745 "min_write_latency_ticks": 0, 00:13:51.745 "unmap_latency_ticks": 0, 00:13:51.745 "max_unmap_latency_ticks": 0, 00:13:51.745 "min_unmap_latency_ticks": 0, 00:13:51.745 "copy_latency_ticks": 0, 00:13:51.745 "max_copy_latency_ticks": 0, 00:13:51.745 "min_copy_latency_ticks": 0 00:13:51.745 }, 00:13:51.745 { 00:13:51.745 "thread_id": 3, 00:13:51.745 "bytes_read": 415236096, 00:13:51.745 "num_read_ops": 101376, 00:13:51.745 "bytes_written": 0, 00:13:51.745 "num_write_ops": 0, 00:13:51.745 "bytes_unmapped": 0, 00:13:51.745 "num_unmap_ops": 0, 00:13:51.745 "bytes_copied": 0, 00:13:51.745 "num_copy_ops": 0, 00:13:51.745 "read_latency_ticks": 1081932721446, 00:13:51.745 "max_read_latency_ticks": 12292072, 00:13:51.745 "min_read_latency_ticks": 8243003, 00:13:51.745 "write_latency_ticks": 0, 00:13:51.745 "max_write_latency_ticks": 0, 00:13:51.745 "min_write_latency_ticks": 0, 00:13:51.745 "unmap_latency_ticks": 0, 00:13:51.745 "max_unmap_latency_ticks": 0, 00:13:51.745 "min_unmap_latency_ticks": 0, 00:13:51.745 "copy_latency_ticks": 0, 00:13:51.745 "max_copy_latency_ticks": 0, 00:13:51.745 "min_copy_latency_ticks": 0 00:13:51.745 } 00:13:51.745 ] 00:13:51.745 }' 00:13:51.745 04:54:15 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:51.745 04:54:15 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=100352 00:13:51.745 04:54:15 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=100352 00:13:51.745 04:54:15 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:51.745 04:54:15 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=101376 00:13:51.745 04:54:15 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=201728 00:13:51.745 04:54:15 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:51.745 04:54:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.745 04:54:15 -- common/autotest_common.sh@10 -- # set +x 00:13:51.745 04:54:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.745 04:54:15 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:51.745 "tick_rate": 2200000000, 00:13:51.745 "ticks": 1766177416053, 00:13:51.745 "bdevs": [ 00:13:51.745 { 00:13:51.745 "name": "Malloc_STAT", 00:13:51.745 "bytes_read": 847286784, 00:13:51.745 "num_read_ops": 206851, 00:13:51.745 "bytes_written": 0, 00:13:51.745 "num_write_ops": 0, 00:13:51.745 "bytes_unmapped": 0, 00:13:51.745 "num_unmap_ops": 0, 00:13:51.745 "bytes_copied": 0, 00:13:51.745 "num_copy_ops": 0, 00:13:51.745 "read_latency_ticks": 2219230508824, 00:13:51.745 "max_read_latency_ticks": 13686306, 00:13:51.745 "min_read_latency_ticks": 312440, 00:13:51.745 "write_latency_ticks": 0, 00:13:51.746 "max_write_latency_ticks": 0, 00:13:51.746 "min_write_latency_ticks": 0, 00:13:51.746 "unmap_latency_ticks": 0, 00:13:51.746 "max_unmap_latency_ticks": 0, 00:13:51.746 
"min_unmap_latency_ticks": 0, 00:13:51.746 "copy_latency_ticks": 0, 00:13:51.746 "max_copy_latency_ticks": 0, 00:13:51.746 "min_copy_latency_ticks": 0, 00:13:51.746 "io_error": {} 00:13:51.746 } 00:13:51.746 ] 00:13:51.746 }' 00:13:51.746 04:54:15 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.746 04:54:15 -- bdev/blockdev.sh@576 -- # io_count2=206851 00:13:51.746 04:54:15 -- bdev/blockdev.sh@581 -- # '[' 201728 -lt 198659 ']' 00:13:51.746 04:54:15 -- bdev/blockdev.sh@581 -- # '[' 201728 -gt 206851 ']' 00:13:51.746 04:54:15 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:51.746 04:54:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.746 04:54:15 -- common/autotest_common.sh@10 -- # set +x 00:13:51.746 00:13:51.746 Latency(us) 00:13:51.746 [2024-11-18T04:54:15.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.746 [2024-11-18T04:54:15.270Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:51.746 Malloc_STAT : 2.01 52129.43 203.63 0.00 0.00 4898.74 1392.64 6225.92 00:13:51.746 [2024-11-18T04:54:15.270Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:51.746 Malloc_STAT : 2.01 52629.72 205.58 0.00 0.00 4852.60 1206.46 5600.35 00:13:51.746 [2024-11-18T04:54:15.270Z] =================================================================================================================== 00:13:51.746 [2024-11-18T04:54:15.270Z] Total : 104759.15 409.22 0.00 0.00 4875.56 1206.46 6225.92 00:13:51.746 0 00:13:51.746 04:54:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.746 04:54:15 -- bdev/blockdev.sh@607 -- # killprocess 68346 00:13:51.746 04:54:15 -- common/autotest_common.sh@936 -- # '[' -z 68346 ']' 00:13:51.746 04:54:15 -- common/autotest_common.sh@940 -- # kill -0 68346 00:13:51.746 04:54:15 -- common/autotest_common.sh@941 -- # uname 00:13:51.746 04:54:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:51.746 04:54:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68346 00:13:51.746 04:54:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:51.746 04:54:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:51.746 killing process with pid 68346 00:13:51.746 04:54:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68346' 00:13:51.746 Received shutdown signal, test time was about 2.159336 seconds 00:13:51.746 00:13:51.746 Latency(us) 00:13:51.746 [2024-11-18T04:54:15.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.746 [2024-11-18T04:54:15.270Z] =================================================================================================================== 00:13:51.746 [2024-11-18T04:54:15.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.746 04:54:15 -- common/autotest_common.sh@955 -- # kill 68346 00:13:51.746 04:54:15 -- common/autotest_common.sh@960 -- # wait 68346 00:13:53.124 04:54:16 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:53.124 00:13:53.124 real 0m4.802s 00:13:53.124 user 0m9.002s 00:13:53.124 sys 0m0.401s 00:13:53.124 04:54:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:53.124 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.124 ************************************ 00:13:53.124 END TEST bdev_stat 00:13:53.124 ************************************ 00:13:53.124 04:54:16 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 
00:13:53.124 04:54:16 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:53.124 04:54:16 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:53.124 04:54:16 -- bdev/blockdev.sh@809 -- # cleanup 00:13:53.124 04:54:16 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:53.383 04:54:16 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:53.383 04:54:16 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:53.383 04:54:16 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:53.383 04:54:16 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:53.383 04:54:16 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:53.383 00:13:53.383 real 2m23.010s 00:13:53.383 user 5m54.789s 00:13:53.383 sys 0m20.698s 00:13:53.383 04:54:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:53.383 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.384 ************************************ 00:13:53.384 END TEST blockdev_general 00:13:53.384 ************************************ 00:13:53.384 04:54:16 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:53.384 04:54:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.384 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.384 ************************************ 00:13:53.384 START TEST bdev_raid 00:13:53.384 ************************************ 00:13:53.384 04:54:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:53.384 * Looking for test storage... 00:13:53.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:53.384 04:54:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:53.384 04:54:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:53.384 04:54:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:53.384 04:54:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:53.384 04:54:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:53.384 04:54:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:53.384 04:54:16 -- scripts/common.sh@335 -- # IFS=.-: 00:13:53.384 04:54:16 -- scripts/common.sh@335 -- # read -ra ver1 00:13:53.384 04:54:16 -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.384 04:54:16 -- scripts/common.sh@336 -- # read -ra ver2 00:13:53.384 04:54:16 -- scripts/common.sh@337 -- # local 'op=<' 00:13:53.384 04:54:16 -- scripts/common.sh@339 -- # ver1_l=2 00:13:53.384 04:54:16 -- scripts/common.sh@340 -- # ver2_l=1 00:13:53.384 04:54:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:53.384 04:54:16 -- scripts/common.sh@343 -- # case "$op" in 00:13:53.384 04:54:16 -- scripts/common.sh@344 -- # : 1 00:13:53.384 04:54:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:53.384 04:54:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.384 04:54:16 -- scripts/common.sh@364 -- # decimal 1 00:13:53.384 04:54:16 -- scripts/common.sh@352 -- # local d=1 00:13:53.384 04:54:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.384 04:54:16 -- scripts/common.sh@354 -- # echo 1 00:13:53.384 04:54:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:53.384 04:54:16 -- scripts/common.sh@365 -- # decimal 2 00:13:53.384 04:54:16 -- scripts/common.sh@352 -- # local d=2 00:13:53.384 04:54:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.384 04:54:16 -- scripts/common.sh@354 -- # echo 2 00:13:53.384 04:54:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:53.384 04:54:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:53.384 04:54:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:53.384 04:54:16 -- scripts/common.sh@367 -- # return 0 00:13:53.384 04:54:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.384 --rc genhtml_branch_coverage=1 00:13:53.384 --rc genhtml_function_coverage=1 00:13:53.384 --rc genhtml_legend=1 00:13:53.384 --rc geninfo_all_blocks=1 00:13:53.384 --rc geninfo_unexecuted_blocks=1 00:13:53.384 00:13:53.384 ' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.384 --rc genhtml_branch_coverage=1 00:13:53.384 --rc genhtml_function_coverage=1 00:13:53.384 --rc genhtml_legend=1 00:13:53.384 --rc geninfo_all_blocks=1 00:13:53.384 --rc geninfo_unexecuted_blocks=1 00:13:53.384 00:13:53.384 ' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.384 --rc genhtml_branch_coverage=1 00:13:53.384 --rc genhtml_function_coverage=1 00:13:53.384 --rc genhtml_legend=1 00:13:53.384 --rc geninfo_all_blocks=1 00:13:53.384 --rc geninfo_unexecuted_blocks=1 00:13:53.384 00:13:53.384 ' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.384 --rc genhtml_branch_coverage=1 00:13:53.384 --rc genhtml_function_coverage=1 00:13:53.384 --rc genhtml_legend=1 00:13:53.384 --rc geninfo_all_blocks=1 00:13:53.384 --rc geninfo_unexecuted_blocks=1 00:13:53.384 00:13:53.384 ' 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:53.384 04:54:16 -- bdev/nbd_common.sh@6 -- # set -e 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:53.384 04:54:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:53.384 04:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.384 04:54:16 -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.384 ************************************ 00:13:53.384 START TEST raid_function_test_raid0 00:13:53.384 ************************************ 00:13:53.384 04:54:16 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@86 -- # raid_pid=68496 00:13:53.384 Process raid pid: 68496 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 68496' 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@88 -- # waitforlisten 68496 /var/tmp/spdk-raid.sock 00:13:53.384 04:54:16 -- common/autotest_common.sh@829 -- # '[' -z 68496 ']' 00:13:53.384 04:54:16 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:53.384 04:54:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.384 04:54:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:53.384 04:54:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.384 04:54:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.384 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.644 [2024-11-18 04:54:16.957045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:53.644 [2024-11-18 04:54:16.957281] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.644 [2024-11-18 04:54:17.130399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.903 [2024-11-18 04:54:17.318506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.162 [2024-11-18 04:54:17.495805] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.421 04:54:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.421 04:54:17 -- common/autotest_common.sh@862 -- # return 0 00:13:54.421 04:54:17 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:54.421 04:54:17 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:54.421 04:54:17 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:54.421 04:54:17 -- bdev/bdev_raid.sh@70 -- # cat 00:13:54.421 04:54:17 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:54.990 [2024-11-18 04:54:18.203782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:54.990 [2024-11-18 04:54:18.206012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:54.990 [2024-11-18 04:54:18.206128] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:54.990 [2024-11-18 04:54:18.206148] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:54.990 [2024-11-18 04:54:18.206326] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:54.990 [2024-11-18 04:54:18.206906] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:54.990 [2024-11-18 04:54:18.206934] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:13:54.990 [2024-11-18 04:54:18.207148] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.990 Base_1 00:13:54.990 Base_2 00:13:54.990 04:54:18 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:54.990 04:54:18 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.990 04:54:18 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:54.990 04:54:18 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:54.990 04:54:18 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:54.990 04:54:18 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@12 -- # local i 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.990 04:54:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:55.249 [2024-11-18 04:54:18.712037] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:13:55.249 /dev/nbd0 00:13:55.249 04:54:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.249 04:54:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.249 04:54:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:55.249 04:54:18 -- common/autotest_common.sh@867 -- # local i 00:13:55.249 04:54:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:55.249 04:54:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:55.249 04:54:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:55.249 04:54:18 -- common/autotest_common.sh@871 -- # break 00:13:55.249 04:54:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:55.249 04:54:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:55.249 04:54:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.249 1+0 records in 00:13:55.249 1+0 records out 00:13:55.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363974 s, 11.3 MB/s 00:13:55.249 04:54:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.249 04:54:18 -- common/autotest_common.sh@884 -- # size=4096 00:13:55.249 04:54:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.249 04:54:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:55.249 04:54:18 -- common/autotest_common.sh@887 -- # return 0 00:13:55.249 04:54:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.249 04:54:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.249 04:54:18 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:55.249 04:54:18 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:55.249 04:54:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:55.822 { 00:13:55.822 "nbd_device": "/dev/nbd0", 00:13:55.822 "bdev_name": "raid" 00:13:55.822 } 00:13:55.822 ]' 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:55.822 { 00:13:55.822 "nbd_device": "/dev/nbd0", 00:13:55.822 "bdev_name": "raid" 00:13:55.822 } 00:13:55.822 ]' 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@65 -- # count=1 00:13:55.822 04:54:19 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:55.822 4096+0 records in 00:13:55.822 4096+0 records out 00:13:55.822 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0187821 s, 112 MB/s 00:13:55.822 04:54:19 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:56.083 4096+0 records in 00:13:56.083 4096+0 records out 00:13:56.083 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.326551 s, 6.4 MB/s 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:56.083 128+0 records in 00:13:56.083 128+0 records out 00:13:56.083 65536 bytes (66 kB, 64 KiB) copied, 0.000578927 s, 
113 MB/s 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:56.083 2035+0 records in 00:13:56.083 2035+0 records out 00:13:56.083 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0075038 s, 139 MB/s 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:56.083 456+0 records in 00:13:56.083 456+0 records out 00:13:56.083 233472 bytes (233 kB, 228 KiB) copied, 0.00102468 s, 228 MB/s 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:56.083 04:54:19 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:56.083 04:54:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:56.083 04:54:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:56.083 04:54:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.083 04:54:19 -- bdev/nbd_common.sh@51 -- # local i 00:13:56.083 04:54:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.083 04:54:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.342 [2024-11-18 04:54:19.785122] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@41 -- # break 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.342 04:54:19 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:56.342 04:54:19 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@65 -- # true 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@65 -- # count=0 00:13:56.601 04:54:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:56.601 04:54:20 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:56.601 04:54:20 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:56.601 04:54:20 -- bdev/bdev_raid.sh@111 -- # killprocess 68496 00:13:56.601 04:54:20 -- common/autotest_common.sh@936 -- # '[' -z 68496 ']' 00:13:56.601 04:54:20 -- common/autotest_common.sh@940 -- # kill -0 68496 00:13:56.601 04:54:20 -- common/autotest_common.sh@941 -- # uname 00:13:56.601 04:54:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.601 04:54:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68496 00:13:56.601 04:54:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:56.601 04:54:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:56.601 04:54:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68496' 00:13:56.601 killing process with pid 68496 00:13:56.601 04:54:20 -- common/autotest_common.sh@955 -- # kill 68496 00:13:56.601 [2024-11-18 04:54:20.102258] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.601 04:54:20 -- common/autotest_common.sh@960 -- # wait 68496 00:13:56.601 [2024-11-18 04:54:20.102388] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.601 [2024-11-18 04:54:20.102452] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.601 [2024-11-18 04:54:20.102471] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:13:56.860 [2024-11-18 04:54:20.269988] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:58.239 00:13:58.239 real 0m4.461s 00:13:58.239 user 0m5.671s 00:13:58.239 sys 0m0.984s 00:13:58.239 04:54:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.239 ************************************ 00:13:58.239 END TEST raid_function_test_raid0 00:13:58.239 ************************************ 00:13:58.239 04:54:21 -- common/autotest_common.sh@10 -- # set +x 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:58.239 04:54:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:58.239 04:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.239 04:54:21 -- common/autotest_common.sh@10 -- # set +x 00:13:58.239 ************************************ 00:13:58.239 START TEST raid_function_test_concat 00:13:58.239 ************************************ 00:13:58.239 04:54:21 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@83 -- # local 
raid_bdev 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@86 -- # raid_pid=68646 00:13:58.239 Process raid pid: 68646 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 68646' 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@88 -- # waitforlisten 68646 /var/tmp/spdk-raid.sock 00:13:58.239 04:54:21 -- common/autotest_common.sh@829 -- # '[' -z 68646 ']' 00:13:58.239 04:54:21 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:58.239 04:54:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:58.239 04:54:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:58.239 04:54:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:58.239 04:54:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.239 04:54:21 -- common/autotest_common.sh@10 -- # set +x 00:13:58.239 [2024-11-18 04:54:21.467398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:58.239 [2024-11-18 04:54:21.467562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.239 [2024-11-18 04:54:21.642113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.499 [2024-11-18 04:54:21.823228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.499 [2024-11-18 04:54:21.995739] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.067 04:54:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.067 04:54:22 -- common/autotest_common.sh@862 -- # return 0 00:13:59.067 04:54:22 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:59.067 04:54:22 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:59.067 04:54:22 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:59.067 04:54:22 -- bdev/bdev_raid.sh@70 -- # cat 00:13:59.067 04:54:22 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:59.326 [2024-11-18 04:54:22.749870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:59.326 [2024-11-18 04:54:22.751945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:59.326 [2024-11-18 04:54:22.752049] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:59.326 [2024-11-18 04:54:22.752066] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:59.326 [2024-11-18 04:54:22.752185] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:59.326 [2024-11-18 04:54:22.752597] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:59.326 [2024-11-18 04:54:22.752625] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:13:59.326 [2024-11-18 04:54:22.752835] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.326 Base_1 00:13:59.326 Base_2 00:13:59.326 04:54:22 -- bdev/bdev_raid.sh@77 -- # rm -rf 
/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:59.326 04:54:22 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:59.326 04:54:22 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:59.585 04:54:22 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:59.585 04:54:22 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:59.585 04:54:22 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@12 -- # local i 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.585 04:54:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:59.845 [2024-11-18 04:54:23.218060] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:13:59.845 /dev/nbd0 00:13:59.845 04:54:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:59.845 04:54:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.845 04:54:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:59.845 04:54:23 -- common/autotest_common.sh@867 -- # local i 00:13:59.845 04:54:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:59.845 04:54:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:59.845 04:54:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:59.845 04:54:23 -- common/autotest_common.sh@871 -- # break 00:13:59.845 04:54:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:59.845 04:54:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:59.845 04:54:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.845 1+0 records in 00:13:59.845 1+0 records out 00:13:59.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276752 s, 14.8 MB/s 00:13:59.845 04:54:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.845 04:54:23 -- common/autotest_common.sh@884 -- # size=4096 00:13:59.845 04:54:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.845 04:54:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:59.845 04:54:23 -- common/autotest_common.sh@887 -- # return 0 00:13:59.845 04:54:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.845 04:54:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.845 04:54:23 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:59.845 04:54:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:59.845 04:54:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:00.104 { 00:14:00.104 "nbd_device": "/dev/nbd0", 00:14:00.104 "bdev_name": "raid" 00:14:00.104 } 00:14:00.104 ]' 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@64 -- # echo '[ 
00:14:00.104 { 00:14:00.104 "nbd_device": "/dev/nbd0", 00:14:00.104 "bdev_name": "raid" 00:14:00.104 } 00:14:00.104 ]' 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@65 -- # count=1 00:14:00.104 04:54:23 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:00.104 4096+0 records in 00:14:00.104 4096+0 records out 00:14:00.104 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0218015 s, 96.2 MB/s 00:14:00.104 04:54:23 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:00.672 4096+0 records in 00:14:00.672 4096+0 records out 00:14:00.672 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.310777 s, 6.7 MB/s 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:00.672 128+0 records in 00:14:00.672 128+0 records out 00:14:00.672 65536 bytes (66 kB, 64 KiB) copied, 0.000352217 s, 186 MB/s 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=526336 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:00.672 2035+0 records in 00:14:00.672 2035+0 records out 00:14:00.672 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00617533 s, 169 MB/s 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:00.672 456+0 records in 00:14:00.672 456+0 records out 00:14:00.672 233472 bytes (233 kB, 228 KiB) copied, 0.0012511 s, 187 MB/s 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:00.672 04:54:23 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:00.672 04:54:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:00.672 04:54:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.672 04:54:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.672 04:54:23 -- bdev/nbd_common.sh@51 -- # local i 00:14:00.672 04:54:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.673 04:54:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:00.931 [2024-11-18 04:54:24.250224] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@41 -- # break 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.931 04:54:24 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:00.931 04:54:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:01.190 04:54:24 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@65 -- # true 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@65 -- # count=0 00:14:01.190 04:54:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:01.190 04:54:24 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:01.190 04:54:24 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:01.190 04:54:24 -- bdev/bdev_raid.sh@111 -- # killprocess 68646 00:14:01.190 04:54:24 -- common/autotest_common.sh@936 -- # '[' -z 68646 ']' 00:14:01.190 04:54:24 -- common/autotest_common.sh@940 -- # kill -0 68646 00:14:01.190 04:54:24 -- common/autotest_common.sh@941 -- # uname 00:14:01.190 04:54:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:01.190 04:54:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68646 00:14:01.190 04:54:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:01.190 04:54:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:01.190 killing process with pid 68646 00:14:01.190 04:54:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68646' 00:14:01.190 04:54:24 -- common/autotest_common.sh@955 -- # kill 68646 00:14:01.190 [2024-11-18 04:54:24.561739] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.190 04:54:24 -- common/autotest_common.sh@960 -- # wait 68646 00:14:01.190 [2024-11-18 04:54:24.561857] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.190 [2024-11-18 04:54:24.561922] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.190 [2024-11-18 04:54:24.561940] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:14:01.448 [2024-11-18 04:54:24.722882] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:02.385 00:14:02.385 real 0m4.427s 00:14:02.385 user 0m5.643s 00:14:02.385 sys 0m0.937s 00:14:02.385 04:54:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:02.385 04:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:02.385 ************************************ 00:14:02.385 END TEST raid_function_test_concat 00:14:02.385 ************************************ 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:02.385 04:54:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:02.385 04:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:02.385 04:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:02.385 ************************************ 00:14:02.385 START TEST raid0_resize_test 00:14:02.385 ************************************ 00:14:02.385 04:54:25 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@301 -- # raid_pid=68789 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 68789' 00:14:02.385 Process raid pid: 68789 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@303 -- # waitforlisten 
68789 /var/tmp/spdk-raid.sock 00:14:02.385 04:54:25 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:02.385 04:54:25 -- common/autotest_common.sh@829 -- # '[' -z 68789 ']' 00:14:02.385 04:54:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:02.385 04:54:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:02.386 04:54:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:02.386 04:54:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.386 04:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:02.645 [2024-11-18 04:54:25.934592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:02.645 [2024-11-18 04:54:25.934747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.645 [2024-11-18 04:54:26.095615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.904 [2024-11-18 04:54:26.278636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.162 [2024-11-18 04:54:26.457140] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.421 04:54:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.421 04:54:26 -- common/autotest_common.sh@862 -- # return 0 00:14:03.421 04:54:26 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:03.680 Base_1 00:14:03.680 04:54:27 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:03.940 Base_2 00:14:03.940 04:54:27 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:04.199 [2024-11-18 04:54:27.518716] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:04.199 [2024-11-18 04:54:27.521026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:04.199 [2024-11-18 04:54:27.521114] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:04.199 [2024-11-18 04:54:27.521134] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:04.199 [2024-11-18 04:54:27.521310] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005450 00:14:04.199 [2024-11-18 04:54:27.521707] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:04.199 [2024-11-18 04:54:27.521723] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000006f80 00:14:04.199 [2024-11-18 04:54:27.521926] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.199 04:54:27 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:04.458 [2024-11-18 04:54:27.778747] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:04.458 [2024-11-18 
04:54:27.778801] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:04.458 true 00:14:04.458 04:54:27 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:04.458 04:54:27 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:04.717 [2024-11-18 04:54:27.990928] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.717 04:54:28 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:04.717 04:54:28 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:04.717 04:54:28 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:04.717 04:54:28 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:04.717 [2024-11-18 04:54:28.206870] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:04.717 [2024-11-18 04:54:28.206929] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:04.717 [2024-11-18 04:54:28.207001] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:04.717 [2024-11-18 04:54:28.207044] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:04.717 true 00:14:04.718 04:54:28 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:04.718 04:54:28 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:04.977 [2024-11-18 04:54:28.463146] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.977 04:54:28 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:04.977 04:54:28 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:04.977 04:54:28 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:04.977 04:54:28 -- bdev/bdev_raid.sh@332 -- # killprocess 68789 00:14:04.977 04:54:28 -- common/autotest_common.sh@936 -- # '[' -z 68789 ']' 00:14:04.977 04:54:28 -- common/autotest_common.sh@940 -- # kill -0 68789 00:14:04.977 04:54:28 -- common/autotest_common.sh@941 -- # uname 00:14:04.977 04:54:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:04.977 04:54:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68789 00:14:05.236 04:54:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.236 killing process with pid 68789 00:14:05.236 04:54:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.236 04:54:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68789' 00:14:05.236 04:54:28 -- common/autotest_common.sh@955 -- # kill 68789 00:14:05.236 [2024-11-18 04:54:28.514410] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.236 04:54:28 -- common/autotest_common.sh@960 -- # wait 68789 00:14:05.236 [2024-11-18 04:54:28.514499] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.236 [2024-11-18 04:54:28.514556] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.236 [2024-11-18 04:54:28.514589] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Raid, state offline 00:14:05.236 [2024-11-18 04:54:28.515331] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.179 04:54:29 -- bdev/bdev_raid.sh@334 -- # return 
0 00:14:06.179 00:14:06.179 real 0m3.745s 00:14:06.179 user 0m5.301s 00:14:06.179 sys 0m0.435s 00:14:06.179 ************************************ 00:14:06.179 END TEST raid0_resize_test 00:14:06.179 ************************************ 00:14:06.179 04:54:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.179 04:54:29 -- common/autotest_common.sh@10 -- # set +x 00:14:06.179 04:54:29 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:06.179 04:54:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:06.180 04:54:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:06.180 04:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.180 04:54:29 -- common/autotest_common.sh@10 -- # set +x 00:14:06.180 ************************************ 00:14:06.180 START TEST raid_state_function_test 00:14:06.180 ************************************ 00:14:06.180 04:54:29 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=68869 00:14:06.180 Process raid pid: 68869 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 68869' 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 68869 /var/tmp/spdk-raid.sock 00:14:06.180 04:54:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:06.180 04:54:29 -- common/autotest_common.sh@829 -- # '[' -z 68869 ']' 00:14:06.180 04:54:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
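
The 'waitforlisten' call above polls the freshly launched bdev_svc until its RPC UNIX domain socket answers. A minimal bash sketch of that pattern, assuming SPDK's stock scripts/rpc.py client and its rpc_get_methods call; the retry bound mirrors the max_retries=100 visible in the trace that follows, while the poll interval and the exact error handling are illustrative assumptions, not the autotest_common.sh implementation:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock

    # launch the app under test in the background, as the harness does above
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
    svc_pid=$!

    for ((i = 0; i < 100; i++)); do
        # bail out early if the app under test already died
        kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited prematurely" >&2; exit 1; }
        # rpc_get_methods succeeds only once the socket is up and serving RPCs
        "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

The harness pairs this with the trap 'on_error_exit;' ERR set earlier in bdev_raid.sh, so a failed RPC later in the test still tears the daemon down.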
00:14:06.180 04:54:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.180 04:54:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:06.180 04:54:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.180 04:54:29 -- common/autotest_common.sh@10 -- # set +x 00:14:06.438 [2024-11-18 04:54:29.749852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:06.438 [2024-11-18 04:54:29.750025] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.438 [2024-11-18 04:54:29.915591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.697 [2024-11-18 04:54:30.097680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.956 [2024-11-18 04:54:30.280159] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.216 04:54:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.216 04:54:30 -- common/autotest_common.sh@862 -- # return 0 00:14:07.216 04:54:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:07.475 [2024-11-18 04:54:30.892560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.475 [2024-11-18 04:54:30.892658] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.475 [2024-11-18 04:54:30.892674] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.475 [2024-11-18 04:54:30.892690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.475 04:54:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.734 04:54:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.734 "name": "Existed_Raid", 00:14:07.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.734 "strip_size_kb": 64, 00:14:07.734 "state": "configuring", 00:14:07.734 "raid_level": "raid0", 00:14:07.734 "superblock": false, 00:14:07.734 "num_base_bdevs": 2, 00:14:07.734 "num_base_bdevs_discovered": 0, 00:14:07.734 "num_base_bdevs_operational": 2, 00:14:07.734 "base_bdevs_list": [ 00:14:07.734 { 00:14:07.734 "name": "BaseBdev1", 00:14:07.734 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:07.734 "is_configured": false, 00:14:07.734 "data_offset": 0, 00:14:07.734 "data_size": 0 00:14:07.734 }, 00:14:07.734 { 00:14:07.734 "name": "BaseBdev2", 00:14:07.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.734 "is_configured": false, 00:14:07.734 "data_offset": 0, 00:14:07.734 "data_size": 0 00:14:07.734 } 00:14:07.734 ] 00:14:07.734 }' 00:14:07.734 04:54:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.734 04:54:31 -- common/autotest_common.sh@10 -- # set +x 00:14:07.993 04:54:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:08.252 [2024-11-18 04:54:31.696703] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.252 [2024-11-18 04:54:31.696766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:08.252 04:54:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:08.511 [2024-11-18 04:54:31.948823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.511 [2024-11-18 04:54:31.949224] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.511 [2024-11-18 04:54:31.949375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.511 [2024-11-18 04:54:31.949412] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.511 04:54:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.770 [2024-11-18 04:54:32.244540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.770 BaseBdev1 00:14:08.770 04:54:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:08.770 04:54:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:08.770 04:54:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:08.770 04:54:32 -- common/autotest_common.sh@899 -- # local i 00:14:08.770 04:54:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:08.770 04:54:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:08.770 04:54:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.029 04:54:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.328 [ 00:14:09.328 { 00:14:09.328 "name": "BaseBdev1", 00:14:09.328 "aliases": [ 00:14:09.328 "f1742491-546c-4168-baf0-c0f0c82642da" 00:14:09.328 ], 00:14:09.328 "product_name": "Malloc disk", 00:14:09.328 "block_size": 512, 00:14:09.328 "num_blocks": 65536, 00:14:09.328 "uuid": "f1742491-546c-4168-baf0-c0f0c82642da", 00:14:09.328 "assigned_rate_limits": { 00:14:09.328 "rw_ios_per_sec": 0, 00:14:09.328 "rw_mbytes_per_sec": 0, 00:14:09.328 "r_mbytes_per_sec": 0, 00:14:09.328 "w_mbytes_per_sec": 0 00:14:09.328 }, 00:14:09.328 "claimed": true, 00:14:09.328 "claim_type": "exclusive_write", 00:14:09.328 "zoned": false, 00:14:09.328 "supported_io_types": { 00:14:09.328 "read": true, 00:14:09.328 "write": true, 00:14:09.328 "unmap": true, 00:14:09.328 "write_zeroes": 
true, 00:14:09.328 "flush": true, 00:14:09.328 "reset": true, 00:14:09.328 "compare": false, 00:14:09.328 "compare_and_write": false, 00:14:09.328 "abort": true, 00:14:09.328 "nvme_admin": false, 00:14:09.328 "nvme_io": false 00:14:09.328 }, 00:14:09.328 "memory_domains": [ 00:14:09.328 { 00:14:09.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.328 "dma_device_type": 2 00:14:09.328 } 00:14:09.328 ], 00:14:09.328 "driver_specific": {} 00:14:09.328 } 00:14:09.328 ] 00:14:09.328 04:54:32 -- common/autotest_common.sh@905 -- # return 0 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.328 04:54:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.588 04:54:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.588 "name": "Existed_Raid", 00:14:09.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.588 "strip_size_kb": 64, 00:14:09.588 "state": "configuring", 00:14:09.588 "raid_level": "raid0", 00:14:09.588 "superblock": false, 00:14:09.588 "num_base_bdevs": 2, 00:14:09.588 "num_base_bdevs_discovered": 1, 00:14:09.588 "num_base_bdevs_operational": 2, 00:14:09.588 "base_bdevs_list": [ 00:14:09.588 { 00:14:09.588 "name": "BaseBdev1", 00:14:09.588 "uuid": "f1742491-546c-4168-baf0-c0f0c82642da", 00:14:09.588 "is_configured": true, 00:14:09.588 "data_offset": 0, 00:14:09.588 "data_size": 65536 00:14:09.588 }, 00:14:09.588 { 00:14:09.588 "name": "BaseBdev2", 00:14:09.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.588 "is_configured": false, 00:14:09.588 "data_offset": 0, 00:14:09.588 "data_size": 0 00:14:09.588 } 00:14:09.588 ] 00:14:09.588 }' 00:14:09.588 04:54:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.588 04:54:33 -- common/autotest_common.sh@10 -- # set +x 00:14:10.156 04:54:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:10.156 [2024-11-18 04:54:33.637249] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.156 [2024-11-18 04:54:33.637375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:10.156 04:54:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:10.156 04:54:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:10.415 [2024-11-18 04:54:33.913393] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.415 [2024-11-18 04:54:33.915911] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.415 [2024-11-18 04:54:33.916143] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.415 04:54:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.674 04:54:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.933 04:54:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.933 "name": "Existed_Raid", 00:14:10.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.933 "strip_size_kb": 64, 00:14:10.933 "state": "configuring", 00:14:10.933 "raid_level": "raid0", 00:14:10.933 "superblock": false, 00:14:10.933 "num_base_bdevs": 2, 00:14:10.933 "num_base_bdevs_discovered": 1, 00:14:10.933 "num_base_bdevs_operational": 2, 00:14:10.933 "base_bdevs_list": [ 00:14:10.933 { 00:14:10.933 "name": "BaseBdev1", 00:14:10.933 "uuid": "f1742491-546c-4168-baf0-c0f0c82642da", 00:14:10.933 "is_configured": true, 00:14:10.933 "data_offset": 0, 00:14:10.933 "data_size": 65536 00:14:10.933 }, 00:14:10.933 { 00:14:10.933 "name": "BaseBdev2", 00:14:10.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.933 "is_configured": false, 00:14:10.933 "data_offset": 0, 00:14:10.933 "data_size": 0 00:14:10.933 } 00:14:10.933 ] 00:14:10.933 }' 00:14:10.933 04:54:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.933 04:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 04:54:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.452 [2024-11-18 04:54:34.799539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.452 [2024-11-18 04:54:34.799863] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:11.452 [2024-11-18 04:54:34.799890] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:11.452 [2024-11-18 04:54:34.800027] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:11.452 [2024-11-18 04:54:34.800471] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:11.452 [2024-11-18 04:54:34.800494] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:14:11.452 [2024-11-18 04:54:34.800796] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:11.452 BaseBdev2 00:14:11.452 04:54:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:11.452 04:54:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:11.452 04:54:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:11.452 04:54:34 -- common/autotest_common.sh@899 -- # local i 00:14:11.452 04:54:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:11.452 04:54:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:11.452 04:54:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.712 04:54:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.971 [ 00:14:11.971 { 00:14:11.971 "name": "BaseBdev2", 00:14:11.971 "aliases": [ 00:14:11.971 "c25240e4-320a-4239-935c-19fd5eb89cb6" 00:14:11.971 ], 00:14:11.971 "product_name": "Malloc disk", 00:14:11.971 "block_size": 512, 00:14:11.971 "num_blocks": 65536, 00:14:11.971 "uuid": "c25240e4-320a-4239-935c-19fd5eb89cb6", 00:14:11.971 "assigned_rate_limits": { 00:14:11.971 "rw_ios_per_sec": 0, 00:14:11.971 "rw_mbytes_per_sec": 0, 00:14:11.971 "r_mbytes_per_sec": 0, 00:14:11.971 "w_mbytes_per_sec": 0 00:14:11.971 }, 00:14:11.971 "claimed": true, 00:14:11.971 "claim_type": "exclusive_write", 00:14:11.971 "zoned": false, 00:14:11.971 "supported_io_types": { 00:14:11.971 "read": true, 00:14:11.971 "write": true, 00:14:11.971 "unmap": true, 00:14:11.971 "write_zeroes": true, 00:14:11.971 "flush": true, 00:14:11.971 "reset": true, 00:14:11.971 "compare": false, 00:14:11.971 "compare_and_write": false, 00:14:11.971 "abort": true, 00:14:11.971 "nvme_admin": false, 00:14:11.971 "nvme_io": false 00:14:11.971 }, 00:14:11.971 "memory_domains": [ 00:14:11.971 { 00:14:11.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.971 "dma_device_type": 2 00:14:11.971 } 00:14:11.971 ], 00:14:11.971 "driver_specific": {} 00:14:11.971 } 00:14:11.971 ] 00:14:11.971 04:54:35 -- common/autotest_common.sh@905 -- # return 0 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.971 04:54:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.261 04:54:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.261 "name": "Existed_Raid", 00:14:12.261 "uuid": "8d2141c2-d82a-4c38-97ed-f4e988705484", 00:14:12.261 "strip_size_kb": 64, 00:14:12.261 "state": 
"online", 00:14:12.261 "raid_level": "raid0", 00:14:12.261 "superblock": false, 00:14:12.261 "num_base_bdevs": 2, 00:14:12.261 "num_base_bdevs_discovered": 2, 00:14:12.261 "num_base_bdevs_operational": 2, 00:14:12.261 "base_bdevs_list": [ 00:14:12.261 { 00:14:12.261 "name": "BaseBdev1", 00:14:12.261 "uuid": "f1742491-546c-4168-baf0-c0f0c82642da", 00:14:12.261 "is_configured": true, 00:14:12.261 "data_offset": 0, 00:14:12.261 "data_size": 65536 00:14:12.261 }, 00:14:12.261 { 00:14:12.261 "name": "BaseBdev2", 00:14:12.261 "uuid": "c25240e4-320a-4239-935c-19fd5eb89cb6", 00:14:12.261 "is_configured": true, 00:14:12.261 "data_offset": 0, 00:14:12.261 "data_size": 65536 00:14:12.261 } 00:14:12.261 ] 00:14:12.261 }' 00:14:12.261 04:54:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.261 04:54:35 -- common/autotest_common.sh@10 -- # set +x 00:14:12.520 04:54:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.780 [2024-11-18 04:54:36.212331] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.780 [2024-11-18 04:54:36.212369] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.780 [2024-11-18 04:54:36.212462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.039 04:54:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.040 "name": "Existed_Raid", 00:14:13.040 "uuid": "8d2141c2-d82a-4c38-97ed-f4e988705484", 00:14:13.040 "strip_size_kb": 64, 00:14:13.040 "state": "offline", 00:14:13.040 "raid_level": "raid0", 00:14:13.040 "superblock": false, 00:14:13.040 "num_base_bdevs": 2, 00:14:13.040 "num_base_bdevs_discovered": 1, 00:14:13.040 "num_base_bdevs_operational": 1, 00:14:13.040 "base_bdevs_list": [ 00:14:13.040 { 00:14:13.040 "name": null, 00:14:13.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.040 "is_configured": false, 00:14:13.040 "data_offset": 0, 00:14:13.040 "data_size": 65536 00:14:13.040 }, 00:14:13.040 { 00:14:13.040 "name": "BaseBdev2", 00:14:13.040 "uuid": "c25240e4-320a-4239-935c-19fd5eb89cb6", 00:14:13.040 
"is_configured": true, 00:14:13.040 "data_offset": 0, 00:14:13.040 "data_size": 65536 00:14:13.040 } 00:14:13.040 ] 00:14:13.040 }' 00:14:13.040 04:54:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.040 04:54:36 -- common/autotest_common.sh@10 -- # set +x 00:14:13.608 04:54:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:13.608 04:54:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:13.608 04:54:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.608 04:54:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:13.608 04:54:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:13.608 04:54:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.608 04:54:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:13.868 [2024-11-18 04:54:37.365027] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.868 [2024-11-18 04:54:37.365323] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:14:14.127 04:54:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:14.127 04:54:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:14.127 04:54:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.127 04:54:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.386 04:54:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:14.386 04:54:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:14.386 04:54:37 -- bdev/bdev_raid.sh@287 -- # killprocess 68869 00:14:14.386 04:54:37 -- common/autotest_common.sh@936 -- # '[' -z 68869 ']' 00:14:14.386 04:54:37 -- common/autotest_common.sh@940 -- # kill -0 68869 00:14:14.386 04:54:37 -- common/autotest_common.sh@941 -- # uname 00:14:14.386 04:54:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.386 04:54:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68869 00:14:14.386 killing process with pid 68869 00:14:14.386 04:54:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.386 04:54:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.386 04:54:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68869' 00:14:14.386 04:54:37 -- common/autotest_common.sh@955 -- # kill 68869 00:14:14.386 [2024-11-18 04:54:37.737928] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.386 04:54:37 -- common/autotest_common.sh@960 -- # wait 68869 00:14:14.386 [2024-11-18 04:54:37.738050] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:15.764 00:14:15.764 real 0m9.217s 00:14:15.764 user 0m15.112s 00:14:15.764 sys 0m1.313s 00:14:15.764 ************************************ 00:14:15.764 END TEST raid_state_function_test 00:14:15.764 ************************************ 00:14:15.764 04:54:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.764 04:54:38 -- common/autotest_common.sh@10 -- # set +x 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:15.764 04:54:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:15.764 04:54:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:14:15.764 04:54:38 -- common/autotest_common.sh@10 -- # set +x 00:14:15.764 ************************************ 00:14:15.764 START TEST raid_state_function_test_sb 00:14:15.764 ************************************ 00:14:15.764 04:54:38 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:15.764 04:54:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=69161 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:15.765 Process raid pid: 69161 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69161' 00:14:15.765 04:54:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69161 /var/tmp/spdk-raid.sock 00:14:15.765 04:54:38 -- common/autotest_common.sh@829 -- # '[' -z 69161 ']' 00:14:15.765 04:54:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:15.765 04:54:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.765 04:54:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:15.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:15.765 04:54:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.765 04:54:38 -- common/autotest_common.sh@10 -- # set +x 00:14:15.765 [2024-11-18 04:54:39.031117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
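One more annotation before the superblock variant's output (again, not part of the log): judging from the trace, the only input difference in raid_state_function_test_sb is the extra `-s` flag passed to bdev_raid_create, and its visible effect is in the bdev_raid_get_bdevs JSON later on, where each 65536-block base bdev reports data_offset 2048 / data_size 63488 instead of 0 / 65536 -- the leading blocks being reserved for the on-disk raid superblock. A minimal sketch of the two invocations, reusing the exact commands from the trace:

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Previous test, no superblock: JSON shows data_offset 0, data_size 65536.
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# This test, with superblock (-s): JSON shows "superblock": true,
# data_offset 2048, data_size 63488 for each base bdev.
$rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
```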
00:14:15.765 [2024-11-18 04:54:39.031562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.765 [2024-11-18 04:54:39.209101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.023 [2024-11-18 04:54:39.447122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.281 [2024-11-18 04:54:39.634941] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.541 04:54:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.541 04:54:39 -- common/autotest_common.sh@862 -- # return 0 00:14:16.541 04:54:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:16.800 [2024-11-18 04:54:40.244324] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.800 [2024-11-18 04:54:40.244409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.800 [2024-11-18 04:54:40.244442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.800 [2024-11-18 04:54:40.244472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.800 04:54:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.058 04:54:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.059 "name": "Existed_Raid", 00:14:17.059 "uuid": "b46e159e-18b3-4ecb-a263-f2db23564004", 00:14:17.059 "strip_size_kb": 64, 00:14:17.059 "state": "configuring", 00:14:17.059 "raid_level": "raid0", 00:14:17.059 "superblock": true, 00:14:17.059 "num_base_bdevs": 2, 00:14:17.059 "num_base_bdevs_discovered": 0, 00:14:17.059 "num_base_bdevs_operational": 2, 00:14:17.059 "base_bdevs_list": [ 00:14:17.059 { 00:14:17.059 "name": "BaseBdev1", 00:14:17.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.059 "is_configured": false, 00:14:17.059 "data_offset": 0, 00:14:17.059 "data_size": 0 00:14:17.059 }, 00:14:17.059 { 00:14:17.059 "name": "BaseBdev2", 00:14:17.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.059 "is_configured": false, 00:14:17.059 "data_offset": 0, 00:14:17.059 "data_size": 0 00:14:17.059 } 00:14:17.059 ] 00:14:17.059 }' 00:14:17.059 04:54:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.059 04:54:40 -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.626 04:54:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.626 [2024-11-18 04:54:41.140535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.626 [2024-11-18 04:54:41.140850] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:17.886 04:54:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:18.145 [2024-11-18 04:54:41.412726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.145 [2024-11-18 04:54:41.413047] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.145 [2024-11-18 04:54:41.413084] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.145 [2024-11-18 04:54:41.413105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.145 04:54:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.405 [2024-11-18 04:54:41.703954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.405 BaseBdev1 00:14:18.405 04:54:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:18.405 04:54:41 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:18.405 04:54:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:18.405 04:54:41 -- common/autotest_common.sh@899 -- # local i 00:14:18.405 04:54:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:18.405 04:54:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:18.405 04:54:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.664 04:54:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.664 [ 00:14:18.664 { 00:14:18.664 "name": "BaseBdev1", 00:14:18.664 "aliases": [ 00:14:18.664 "a4fad87c-6262-41df-a0ed-d4064f2ef0bf" 00:14:18.664 ], 00:14:18.664 "product_name": "Malloc disk", 00:14:18.664 "block_size": 512, 00:14:18.664 "num_blocks": 65536, 00:14:18.664 "uuid": "a4fad87c-6262-41df-a0ed-d4064f2ef0bf", 00:14:18.664 "assigned_rate_limits": { 00:14:18.664 "rw_ios_per_sec": 0, 00:14:18.664 "rw_mbytes_per_sec": 0, 00:14:18.664 "r_mbytes_per_sec": 0, 00:14:18.664 "w_mbytes_per_sec": 0 00:14:18.664 }, 00:14:18.664 "claimed": true, 00:14:18.664 "claim_type": "exclusive_write", 00:14:18.664 "zoned": false, 00:14:18.664 "supported_io_types": { 00:14:18.664 "read": true, 00:14:18.664 "write": true, 00:14:18.664 "unmap": true, 00:14:18.664 "write_zeroes": true, 00:14:18.664 "flush": true, 00:14:18.664 "reset": true, 00:14:18.664 "compare": false, 00:14:18.664 "compare_and_write": false, 00:14:18.664 "abort": true, 00:14:18.664 "nvme_admin": false, 00:14:18.664 "nvme_io": false 00:14:18.664 }, 00:14:18.664 "memory_domains": [ 00:14:18.664 { 00:14:18.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.664 "dma_device_type": 2 00:14:18.664 } 00:14:18.664 ], 00:14:18.664 "driver_specific": {} 00:14:18.664 } 00:14:18.664 ] 00:14:18.923 
04:54:42 -- common/autotest_common.sh@905 -- # return 0 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.923 04:54:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.181 04:54:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.181 "name": "Existed_Raid", 00:14:19.181 "uuid": "d788ded2-c0ef-4391-84bd-448b1432d6d0", 00:14:19.181 "strip_size_kb": 64, 00:14:19.181 "state": "configuring", 00:14:19.181 "raid_level": "raid0", 00:14:19.181 "superblock": true, 00:14:19.181 "num_base_bdevs": 2, 00:14:19.181 "num_base_bdevs_discovered": 1, 00:14:19.181 "num_base_bdevs_operational": 2, 00:14:19.181 "base_bdevs_list": [ 00:14:19.181 { 00:14:19.181 "name": "BaseBdev1", 00:14:19.181 "uuid": "a4fad87c-6262-41df-a0ed-d4064f2ef0bf", 00:14:19.181 "is_configured": true, 00:14:19.181 "data_offset": 2048, 00:14:19.181 "data_size": 63488 00:14:19.181 }, 00:14:19.181 { 00:14:19.181 "name": "BaseBdev2", 00:14:19.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.181 "is_configured": false, 00:14:19.181 "data_offset": 0, 00:14:19.181 "data_size": 0 00:14:19.181 } 00:14:19.181 ] 00:14:19.181 }' 00:14:19.181 04:54:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.181 04:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:19.439 04:54:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:19.698 [2024-11-18 04:54:43.008390] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.698 [2024-11-18 04:54:43.008467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:19.698 04:54:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:19.698 04:54:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.955 04:54:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:20.214 BaseBdev1 00:14:20.214 04:54:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:20.214 04:54:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:20.214 04:54:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.214 04:54:43 -- common/autotest_common.sh@899 -- # local i 00:14:20.214 04:54:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.214 04:54:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.214 04:54:43 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.488 04:54:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:20.746 [ 00:14:20.746 { 00:14:20.746 "name": "BaseBdev1", 00:14:20.746 "aliases": [ 00:14:20.746 "e2978426-da0d-493d-8353-7178bb1823d4" 00:14:20.746 ], 00:14:20.746 "product_name": "Malloc disk", 00:14:20.746 "block_size": 512, 00:14:20.746 "num_blocks": 65536, 00:14:20.746 "uuid": "e2978426-da0d-493d-8353-7178bb1823d4", 00:14:20.746 "assigned_rate_limits": { 00:14:20.746 "rw_ios_per_sec": 0, 00:14:20.746 "rw_mbytes_per_sec": 0, 00:14:20.746 "r_mbytes_per_sec": 0, 00:14:20.746 "w_mbytes_per_sec": 0 00:14:20.746 }, 00:14:20.746 "claimed": false, 00:14:20.746 "zoned": false, 00:14:20.746 "supported_io_types": { 00:14:20.746 "read": true, 00:14:20.746 "write": true, 00:14:20.746 "unmap": true, 00:14:20.746 "write_zeroes": true, 00:14:20.746 "flush": true, 00:14:20.746 "reset": true, 00:14:20.746 "compare": false, 00:14:20.746 "compare_and_write": false, 00:14:20.746 "abort": true, 00:14:20.746 "nvme_admin": false, 00:14:20.746 "nvme_io": false 00:14:20.746 }, 00:14:20.746 "memory_domains": [ 00:14:20.746 { 00:14:20.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.746 "dma_device_type": 2 00:14:20.746 } 00:14:20.746 ], 00:14:20.746 "driver_specific": {} 00:14:20.746 } 00:14:20.746 ] 00:14:20.746 04:54:44 -- common/autotest_common.sh@905 -- # return 0 00:14:20.746 04:54:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:21.005 [2024-11-18 04:54:44.349174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.005 [2024-11-18 04:54:44.351675] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.005 [2024-11-18 04:54:44.351911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.005 04:54:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.263 04:54:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.263 "name": "Existed_Raid", 00:14:21.263 "uuid": "83adaf0e-5a98-4c9f-82ae-1ea1838da0b9", 00:14:21.263 "strip_size_kb": 64, 00:14:21.263 "state": 
"configuring", 00:14:21.263 "raid_level": "raid0", 00:14:21.263 "superblock": true, 00:14:21.263 "num_base_bdevs": 2, 00:14:21.263 "num_base_bdevs_discovered": 1, 00:14:21.263 "num_base_bdevs_operational": 2, 00:14:21.263 "base_bdevs_list": [ 00:14:21.263 { 00:14:21.263 "name": "BaseBdev1", 00:14:21.263 "uuid": "e2978426-da0d-493d-8353-7178bb1823d4", 00:14:21.263 "is_configured": true, 00:14:21.263 "data_offset": 2048, 00:14:21.263 "data_size": 63488 00:14:21.263 }, 00:14:21.263 { 00:14:21.263 "name": "BaseBdev2", 00:14:21.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.263 "is_configured": false, 00:14:21.263 "data_offset": 0, 00:14:21.263 "data_size": 0 00:14:21.263 } 00:14:21.263 ] 00:14:21.263 }' 00:14:21.263 04:54:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.263 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:14:21.521 04:54:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:21.780 [2024-11-18 04:54:45.249141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.780 [2024-11-18 04:54:45.249492] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:14:21.780 [2024-11-18 04:54:45.249512] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:21.780 [2024-11-18 04:54:45.249690] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:21.780 [2024-11-18 04:54:45.250053] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:14:21.780 [2024-11-18 04:54:45.250075] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:14:21.780 [2024-11-18 04:54:45.250233] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.780 BaseBdev2 00:14:21.780 04:54:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:21.780 04:54:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:21.780 04:54:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:21.780 04:54:45 -- common/autotest_common.sh@899 -- # local i 00:14:21.780 04:54:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:21.780 04:54:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:21.780 04:54:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.038 04:54:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.297 [ 00:14:22.297 { 00:14:22.297 "name": "BaseBdev2", 00:14:22.297 "aliases": [ 00:14:22.297 "75e0c2b1-318e-4385-8af6-c284798bb214" 00:14:22.297 ], 00:14:22.297 "product_name": "Malloc disk", 00:14:22.297 "block_size": 512, 00:14:22.297 "num_blocks": 65536, 00:14:22.297 "uuid": "75e0c2b1-318e-4385-8af6-c284798bb214", 00:14:22.297 "assigned_rate_limits": { 00:14:22.297 "rw_ios_per_sec": 0, 00:14:22.297 "rw_mbytes_per_sec": 0, 00:14:22.297 "r_mbytes_per_sec": 0, 00:14:22.297 "w_mbytes_per_sec": 0 00:14:22.297 }, 00:14:22.297 "claimed": true, 00:14:22.297 "claim_type": "exclusive_write", 00:14:22.297 "zoned": false, 00:14:22.297 "supported_io_types": { 00:14:22.297 "read": true, 00:14:22.297 "write": true, 00:14:22.297 "unmap": true, 00:14:22.297 "write_zeroes": true, 00:14:22.297 "flush": true, 00:14:22.297 
"reset": true, 00:14:22.297 "compare": false, 00:14:22.297 "compare_and_write": false, 00:14:22.297 "abort": true, 00:14:22.297 "nvme_admin": false, 00:14:22.297 "nvme_io": false 00:14:22.297 }, 00:14:22.297 "memory_domains": [ 00:14:22.297 { 00:14:22.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.297 "dma_device_type": 2 00:14:22.297 } 00:14:22.297 ], 00:14:22.297 "driver_specific": {} 00:14:22.297 } 00:14:22.297 ] 00:14:22.297 04:54:45 -- common/autotest_common.sh@905 -- # return 0 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.297 04:54:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.555 04:54:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.555 "name": "Existed_Raid", 00:14:22.555 "uuid": "83adaf0e-5a98-4c9f-82ae-1ea1838da0b9", 00:14:22.555 "strip_size_kb": 64, 00:14:22.555 "state": "online", 00:14:22.555 "raid_level": "raid0", 00:14:22.555 "superblock": true, 00:14:22.555 "num_base_bdevs": 2, 00:14:22.555 "num_base_bdevs_discovered": 2, 00:14:22.555 "num_base_bdevs_operational": 2, 00:14:22.555 "base_bdevs_list": [ 00:14:22.555 { 00:14:22.555 "name": "BaseBdev1", 00:14:22.555 "uuid": "e2978426-da0d-493d-8353-7178bb1823d4", 00:14:22.555 "is_configured": true, 00:14:22.555 "data_offset": 2048, 00:14:22.555 "data_size": 63488 00:14:22.555 }, 00:14:22.555 { 00:14:22.555 "name": "BaseBdev2", 00:14:22.555 "uuid": "75e0c2b1-318e-4385-8af6-c284798bb214", 00:14:22.555 "is_configured": true, 00:14:22.555 "data_offset": 2048, 00:14:22.555 "data_size": 63488 00:14:22.555 } 00:14:22.555 ] 00:14:22.555 }' 00:14:22.555 04:54:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.555 04:54:46 -- common/autotest_common.sh@10 -- # set +x 00:14:23.122 04:54:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:23.122 [2024-11-18 04:54:46.621659] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.122 [2024-11-18 04:54:46.621908] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.122 [2024-11-18 04:54:46.622091] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:23.381 
04:54:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.381 04:54:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.639 04:54:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.639 "name": "Existed_Raid", 00:14:23.639 "uuid": "83adaf0e-5a98-4c9f-82ae-1ea1838da0b9", 00:14:23.639 "strip_size_kb": 64, 00:14:23.639 "state": "offline", 00:14:23.639 "raid_level": "raid0", 00:14:23.639 "superblock": true, 00:14:23.639 "num_base_bdevs": 2, 00:14:23.639 "num_base_bdevs_discovered": 1, 00:14:23.639 "num_base_bdevs_operational": 1, 00:14:23.639 "base_bdevs_list": [ 00:14:23.639 { 00:14:23.639 "name": null, 00:14:23.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.639 "is_configured": false, 00:14:23.639 "data_offset": 2048, 00:14:23.639 "data_size": 63488 00:14:23.639 }, 00:14:23.639 { 00:14:23.639 "name": "BaseBdev2", 00:14:23.639 "uuid": "75e0c2b1-318e-4385-8af6-c284798bb214", 00:14:23.639 "is_configured": true, 00:14:23.639 "data_offset": 2048, 00:14:23.639 "data_size": 63488 00:14:23.639 } 00:14:23.639 ] 00:14:23.639 }' 00:14:23.639 04:54:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.639 04:54:46 -- common/autotest_common.sh@10 -- # set +x 00:14:23.897 04:54:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:23.897 04:54:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:23.897 04:54:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.897 04:54:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:24.155 04:54:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:24.155 04:54:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.155 04:54:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:24.413 [2024-11-18 04:54:47.767605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.413 [2024-11-18 04:54:47.767680] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:14:24.413 04:54:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:24.413 04:54:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:24.413 04:54:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.413 04:54:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:24.672 04:54:48 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:24.672 04:54:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:24.672 04:54:48 -- bdev/bdev_raid.sh@287 -- # killprocess 69161 00:14:24.672 04:54:48 -- common/autotest_common.sh@936 -- # '[' -z 69161 ']' 00:14:24.672 04:54:48 -- common/autotest_common.sh@940 -- # kill -0 69161 00:14:24.672 04:54:48 -- common/autotest_common.sh@941 -- # uname 00:14:24.672 04:54:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.672 04:54:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69161 00:14:24.672 killing process with pid 69161 00:14:24.672 04:54:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:24.672 04:54:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:24.672 04:54:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69161' 00:14:24.672 04:54:48 -- common/autotest_common.sh@955 -- # kill 69161 00:14:24.672 [2024-11-18 04:54:48.173934] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.672 04:54:48 -- common/autotest_common.sh@960 -- # wait 69161 00:14:24.672 [2024-11-18 04:54:48.174041] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.048 ************************************ 00:14:26.048 END TEST raid_state_function_test_sb 00:14:26.048 ************************************ 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:26.048 00:14:26.048 real 0m10.334s 00:14:26.048 user 0m17.082s 00:14:26.048 sys 0m1.472s 00:14:26.048 04:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:26.048 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:26.048 04:54:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:26.048 04:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.048 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:26.048 ************************************ 00:14:26.048 START TEST raid_superblock_test 00:14:26.048 ************************************ 00:14:26.048 04:54:49 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=69467 00:14:26.048 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 69467 /var/tmp/spdk-raid.sock 00:14:26.048 04:54:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:26.048 04:54:49 -- common/autotest_common.sh@829 -- # '[' -z 69467 ']' 00:14:26.048 04:54:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:26.048 04:54:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.048 04:54:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:26.048 04:54:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.048 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:26.048 [2024-11-18 04:54:49.408681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:26.048 [2024-11-18 04:54:49.408872] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69467 ] 00:14:26.328 [2024-11-18 04:54:49.579886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.328 [2024-11-18 04:54:49.763874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.593 [2024-11-18 04:54:49.941470] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.160 04:54:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.160 04:54:50 -- common/autotest_common.sh@862 -- # return 0 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:27.160 malloc1 00:14:27.160 04:54:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.419 [2024-11-18 04:54:50.853896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.419 [2024-11-18 04:54:50.854171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.419 [2024-11-18 04:54:50.854377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:27.419 [2024-11-18 04:54:50.854504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.419 [2024-11-18 04:54:50.857281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.419 [2024-11-18 04:54:50.857455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.419 pt1 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:27.419 
04:54:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.419 04:54:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:27.677 malloc2 00:14:27.677 04:54:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.935 [2024-11-18 04:54:51.376164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.935 [2024-11-18 04:54:51.376287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.935 [2024-11-18 04:54:51.376324] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:27.935 [2024-11-18 04:54:51.376339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.935 [2024-11-18 04:54:51.378954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.935 [2024-11-18 04:54:51.379024] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.935 pt2 00:14:27.935 04:54:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:27.935 04:54:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:27.935 04:54:51 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:28.194 [2024-11-18 04:54:51.624298] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.194 [2024-11-18 04:54:51.626606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.194 [2024-11-18 04:54:51.626853] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:14:28.194 [2024-11-18 04:54:51.626874] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.194 [2024-11-18 04:54:51.627046] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:28.194 [2024-11-18 04:54:51.627467] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:14:28.194 [2024-11-18 04:54:51.627493] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:14:28.194 [2024-11-18 04:54:51.627668] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:28.194 
04:54:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.194 04:54:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.452 04:54:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.452 "name": "raid_bdev1", 00:14:28.452 "uuid": "1fad96d3-68b6-429d-8a35-91568dd51537", 00:14:28.452 "strip_size_kb": 64, 00:14:28.452 "state": "online", 00:14:28.452 "raid_level": "raid0", 00:14:28.452 "superblock": true, 00:14:28.452 "num_base_bdevs": 2, 00:14:28.452 "num_base_bdevs_discovered": 2, 00:14:28.452 "num_base_bdevs_operational": 2, 00:14:28.452 "base_bdevs_list": [ 00:14:28.452 { 00:14:28.452 "name": "pt1", 00:14:28.452 "uuid": "3d3a6795-754a-5614-9f9c-4efb772712f4", 00:14:28.452 "is_configured": true, 00:14:28.452 "data_offset": 2048, 00:14:28.452 "data_size": 63488 00:14:28.452 }, 00:14:28.452 { 00:14:28.452 "name": "pt2", 00:14:28.452 "uuid": "ea6a3321-9cbe-5d2a-ab0f-b53e735ec1fc", 00:14:28.452 "is_configured": true, 00:14:28.452 "data_offset": 2048, 00:14:28.452 "data_size": 63488 00:14:28.452 } 00:14:28.452 ] 00:14:28.452 }' 00:14:28.452 04:54:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.452 04:54:51 -- common/autotest_common.sh@10 -- # set +x 00:14:28.710 04:54:52 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:28.710 04:54:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:28.968 [2024-11-18 04:54:52.384693] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.968 04:54:52 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1fad96d3-68b6-429d-8a35-91568dd51537 00:14:28.968 04:54:52 -- bdev/bdev_raid.sh@380 -- # '[' -z 1fad96d3-68b6-429d-8a35-91568dd51537 ']' 00:14:28.968 04:54:52 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:29.226 [2024-11-18 04:54:52.648483] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.226 [2024-11-18 04:54:52.648755] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.226 [2024-11-18 04:54:52.648881] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.226 [2024-11-18 04:54:52.648964] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.226 [2024-11-18 04:54:52.648981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:14:29.226 04:54:52 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.226 04:54:52 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:29.484 04:54:52 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:29.484 04:54:52 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:29.484 04:54:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:29.484 04:54:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:29.743 
04:54:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:29.743 04:54:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:30.001 04:54:53 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:30.001 04:54:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:30.260 04:54:53 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:30.260 04:54:53 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:30.260 04:54:53 -- common/autotest_common.sh@650 -- # local es=0 00:14:30.260 04:54:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:30.260 04:54:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.260 04:54:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.260 04:54:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.260 04:54:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.260 04:54:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.260 04:54:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.260 04:54:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.260 04:54:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:30.260 04:54:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:30.519 [2024-11-18 04:54:53.828786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:30.519 [2024-11-18 04:54:53.830934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:30.519 [2024-11-18 04:54:53.831052] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:30.519 [2024-11-18 04:54:53.831141] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:30.519 [2024-11-18 04:54:53.831173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.519 [2024-11-18 04:54:53.831188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:14:30.519 request: 00:14:30.519 { 00:14:30.519 "name": "raid_bdev1", 00:14:30.519 "raid_level": "raid0", 00:14:30.519 "base_bdevs": [ 00:14:30.519 "malloc1", 00:14:30.519 "malloc2" 00:14:30.519 ], 00:14:30.519 "superblock": false, 00:14:30.519 "strip_size_kb": 64, 00:14:30.519 "method": "bdev_raid_create", 00:14:30.519 "req_id": 1 00:14:30.519 } 00:14:30.519 Got JSON-RPC error response 00:14:30.519 response: 00:14:30.519 { 00:14:30.519 "code": -17, 00:14:30.519 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:30.519 } 00:14:30.519 04:54:53 -- common/autotest_common.sh@653 -- # es=1 00:14:30.519 04:54:53 -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:14:30.519 04:54:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.519 04:54:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.519 04:54:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.519 04:54:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:30.778 [2024-11-18 04:54:54.252868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:30.778 [2024-11-18 04:54:54.252956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.778 [2024-11-18 04:54:54.252988] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:14:30.778 [2024-11-18 04:54:54.253002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.778 [2024-11-18 04:54:54.255546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.778 [2024-11-18 04:54:54.255588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:30.778 [2024-11-18 04:54:54.255710] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:30.778 [2024-11-18 04:54:54.255766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:30.778 pt1 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.778 04:54:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.036 04:54:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:31.036 "name": "raid_bdev1", 00:14:31.036 "uuid": "1fad96d3-68b6-429d-8a35-91568dd51537", 00:14:31.036 "strip_size_kb": 64, 00:14:31.036 "state": "configuring", 00:14:31.036 "raid_level": "raid0", 00:14:31.036 "superblock": true, 00:14:31.036 "num_base_bdevs": 2, 00:14:31.036 "num_base_bdevs_discovered": 1, 00:14:31.036 "num_base_bdevs_operational": 2, 00:14:31.036 "base_bdevs_list": [ 00:14:31.036 { 00:14:31.036 "name": "pt1", 00:14:31.036 "uuid": "3d3a6795-754a-5614-9f9c-4efb772712f4", 00:14:31.036 "is_configured": true, 00:14:31.036 "data_offset": 2048, 00:14:31.036 "data_size": 63488 00:14:31.036 }, 00:14:31.036 { 00:14:31.036 "name": null, 00:14:31.036 "uuid": "ea6a3321-9cbe-5d2a-ab0f-b53e735ec1fc", 00:14:31.036 "is_configured": 
false, 00:14:31.036 "data_offset": 2048, 00:14:31.036 "data_size": 63488 00:14:31.036 } 00:14:31.036 ] 00:14:31.036 }' 00:14:31.036 04:54:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:31.036 04:54:54 -- common/autotest_common.sh@10 -- # set +x 00:14:31.295 04:54:54 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:31.295 04:54:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:31.295 04:54:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:31.295 04:54:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:31.554 [2024-11-18 04:54:55.049075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:31.554 [2024-11-18 04:54:55.049171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.554 [2024-11-18 04:54:55.049245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:14:31.554 [2024-11-18 04:54:55.049264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.554 [2024-11-18 04:54:55.049791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.554 [2024-11-18 04:54:55.049815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:31.554 [2024-11-18 04:54:55.049918] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:31.554 [2024-11-18 04:54:55.049945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.554 [2024-11-18 04:54:55.050077] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:14:31.554 [2024-11-18 04:54:55.050092] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.554 [2024-11-18 04:54:55.050216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:31.554 [2024-11-18 04:54:55.050639] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:14:31.554 [2024-11-18 04:54:55.050699] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:14:31.554 [2024-11-18 04:54:55.050869] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.554 pt2 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:31.554 04:54:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.554 04:54:55 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.813 04:54:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:31.813 "name": "raid_bdev1", 00:14:31.813 "uuid": "1fad96d3-68b6-429d-8a35-91568dd51537", 00:14:31.813 "strip_size_kb": 64, 00:14:31.813 "state": "online", 00:14:31.813 "raid_level": "raid0", 00:14:31.813 "superblock": true, 00:14:31.813 "num_base_bdevs": 2, 00:14:31.813 "num_base_bdevs_discovered": 2, 00:14:31.813 "num_base_bdevs_operational": 2, 00:14:31.813 "base_bdevs_list": [ 00:14:31.813 { 00:14:31.813 "name": "pt1", 00:14:31.813 "uuid": "3d3a6795-754a-5614-9f9c-4efb772712f4", 00:14:31.813 "is_configured": true, 00:14:31.813 "data_offset": 2048, 00:14:31.813 "data_size": 63488 00:14:31.813 }, 00:14:31.813 { 00:14:31.813 "name": "pt2", 00:14:31.813 "uuid": "ea6a3321-9cbe-5d2a-ab0f-b53e735ec1fc", 00:14:31.813 "is_configured": true, 00:14:31.813 "data_offset": 2048, 00:14:31.813 "data_size": 63488 00:14:31.813 } 00:14:31.813 ] 00:14:31.813 }' 00:14:31.813 04:54:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:31.813 04:54:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:32.380 [2024-11-18 04:54:55.861695] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@430 -- # '[' 1fad96d3-68b6-429d-8a35-91568dd51537 '!=' 1fad96d3-68b6-429d-8a35-91568dd51537 ']' 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:32.380 04:54:55 -- bdev/bdev_raid.sh@511 -- # killprocess 69467 00:14:32.380 04:54:55 -- common/autotest_common.sh@936 -- # '[' -z 69467 ']' 00:14:32.380 04:54:55 -- common/autotest_common.sh@940 -- # kill -0 69467 00:14:32.380 04:54:55 -- common/autotest_common.sh@941 -- # uname 00:14:32.380 04:54:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:32.380 04:54:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69467 00:14:32.638 killing process with pid 69467 00:14:32.638 04:54:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:32.638 04:54:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:32.638 04:54:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69467' 00:14:32.638 04:54:55 -- common/autotest_common.sh@955 -- # kill 69467 00:14:32.638 [2024-11-18 04:54:55.912311] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.638 04:54:55 -- common/autotest_common.sh@960 -- # wait 69467 00:14:32.638 [2024-11-18 04:54:55.912399] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.638 [2024-11-18 04:54:55.912466] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.638 [2024-11-18 04:54:55.912491] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:14:32.638 [2024-11-18 04:54:56.080661] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.013 ************************************ 00:14:34.013 END TEST raid_superblock_test 00:14:34.013 ************************************ 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@513 -- 
# return 0 00:14:34.013 00:14:34.013 real 0m7.913s 00:14:34.013 user 0m12.689s 00:14:34.013 sys 0m1.080s 00:14:34.013 04:54:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:34.013 04:54:57 -- common/autotest_common.sh@10 -- # set +x 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:34.013 04:54:57 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:34.013 04:54:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.013 04:54:57 -- common/autotest_common.sh@10 -- # set +x 00:14:34.013 ************************************ 00:14:34.013 START TEST raid_state_function_test 00:14:34.013 ************************************ 00:14:34.013 04:54:57 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=69696 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69696' 00:14:34.013 Process raid pid: 69696 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69696 /var/tmp/spdk-raid.sock 00:14:34.013 04:54:57 -- common/autotest_common.sh@829 -- # '[' -z 69696 ']' 00:14:34.013 04:54:57 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:34.013 04:54:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.013 04:54:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
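The state-function test launching above drives a private bdev_svc daemon rather than a full SPDK application: it starts the service on its own UNIX-domain RPC socket, waits for the listener, and then points every rpc.py call at that socket. A minimal sketch of the pattern, assuming an SPDK checkout at $SPDK_DIR and the waitforlisten helper sourced from autotest_common.sh:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk            # checkout path used in this run
    SOCK=/var/tmp/spdk-raid.sock

    # Bare bdev service on a private RPC socket, with raid debug logging (-L bdev_raid).
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$SOCK"

    # Every raid RPC in the test then targets that socket explicitly.
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }
    rpc bdev_malloc_create 32 512 -b BaseBdev1
    rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

Note that bdev_raid_create may name base bdevs that do not exist yet: the array then waits in the "configuring" state until the last base bdev appears, which is exactly what the traces below exercise.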
00:14:34.013 04:54:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.013 04:54:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.013 04:54:57 -- common/autotest_common.sh@10 -- # set +x 00:14:34.013 [2024-11-18 04:54:57.384593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:34.013 [2024-11-18 04:54:57.384755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.272 [2024-11-18 04:54:57.559884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.531 [2024-11-18 04:54:57.797531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.531 [2024-11-18 04:54:57.988069] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.097 04:54:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.097 04:54:58 -- common/autotest_common.sh@862 -- # return 0 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:35.097 [2024-11-18 04:54:58.554764] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.097 [2024-11-18 04:54:58.554830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.097 [2024-11-18 04:54:58.554847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.097 [2024-11-18 04:54:58.554863] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.097 04:54:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.356 04:54:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.356 "name": "Existed_Raid", 00:14:35.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.356 "strip_size_kb": 64, 00:14:35.356 "state": "configuring", 00:14:35.356 "raid_level": "concat", 00:14:35.356 "superblock": false, 00:14:35.356 "num_base_bdevs": 2, 00:14:35.356 "num_base_bdevs_discovered": 0, 00:14:35.356 "num_base_bdevs_operational": 2, 00:14:35.356 "base_bdevs_list": [ 00:14:35.356 { 00:14:35.356 "name": "BaseBdev1", 00:14:35.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.356 "is_configured": false, 
00:14:35.356 "data_offset": 0, 00:14:35.356 "data_size": 0 00:14:35.356 }, 00:14:35.356 { 00:14:35.356 "name": "BaseBdev2", 00:14:35.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.356 "is_configured": false, 00:14:35.356 "data_offset": 0, 00:14:35.356 "data_size": 0 00:14:35.356 } 00:14:35.356 ] 00:14:35.356 }' 00:14:35.356 04:54:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.356 04:54:58 -- common/autotest_common.sh@10 -- # set +x 00:14:35.614 04:54:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.873 [2024-11-18 04:54:59.294885] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.873 [2024-11-18 04:54:59.294942] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:35.873 04:54:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:36.131 [2024-11-18 04:54:59.515002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.131 [2024-11-18 04:54:59.515111] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.131 [2024-11-18 04:54:59.515134] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.131 [2024-11-18 04:54:59.515151] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.131 04:54:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.390 [2024-11-18 04:54:59.758620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.390 BaseBdev1 00:14:36.390 04:54:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:36.390 04:54:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:36.390 04:54:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.390 04:54:59 -- common/autotest_common.sh@899 -- # local i 00:14:36.390 04:54:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.390 04:54:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.390 04:54:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.649 04:54:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.907 [ 00:14:36.907 { 00:14:36.907 "name": "BaseBdev1", 00:14:36.907 "aliases": [ 00:14:36.907 "3b02b0e3-93ce-45bd-bb25-6e70b8e9c079" 00:14:36.907 ], 00:14:36.907 "product_name": "Malloc disk", 00:14:36.907 "block_size": 512, 00:14:36.907 "num_blocks": 65536, 00:14:36.907 "uuid": "3b02b0e3-93ce-45bd-bb25-6e70b8e9c079", 00:14:36.907 "assigned_rate_limits": { 00:14:36.907 "rw_ios_per_sec": 0, 00:14:36.907 "rw_mbytes_per_sec": 0, 00:14:36.907 "r_mbytes_per_sec": 0, 00:14:36.907 "w_mbytes_per_sec": 0 00:14:36.907 }, 00:14:36.907 "claimed": true, 00:14:36.907 "claim_type": "exclusive_write", 00:14:36.907 "zoned": false, 00:14:36.907 "supported_io_types": { 00:14:36.907 "read": true, 00:14:36.907 "write": true, 00:14:36.907 "unmap": true, 00:14:36.907 "write_zeroes": true, 00:14:36.907 "flush": true, 00:14:36.907 "reset": true, 00:14:36.907 
"compare": false, 00:14:36.907 "compare_and_write": false, 00:14:36.907 "abort": true, 00:14:36.907 "nvme_admin": false, 00:14:36.907 "nvme_io": false 00:14:36.907 }, 00:14:36.907 "memory_domains": [ 00:14:36.907 { 00:14:36.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.907 "dma_device_type": 2 00:14:36.907 } 00:14:36.907 ], 00:14:36.907 "driver_specific": {} 00:14:36.907 } 00:14:36.907 ] 00:14:36.908 04:55:00 -- common/autotest_common.sh@905 -- # return 0 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.908 04:55:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.166 04:55:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.166 "name": "Existed_Raid", 00:14:37.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.166 "strip_size_kb": 64, 00:14:37.166 "state": "configuring", 00:14:37.166 "raid_level": "concat", 00:14:37.166 "superblock": false, 00:14:37.166 "num_base_bdevs": 2, 00:14:37.166 "num_base_bdevs_discovered": 1, 00:14:37.166 "num_base_bdevs_operational": 2, 00:14:37.166 "base_bdevs_list": [ 00:14:37.166 { 00:14:37.166 "name": "BaseBdev1", 00:14:37.166 "uuid": "3b02b0e3-93ce-45bd-bb25-6e70b8e9c079", 00:14:37.166 "is_configured": true, 00:14:37.166 "data_offset": 0, 00:14:37.166 "data_size": 65536 00:14:37.166 }, 00:14:37.166 { 00:14:37.166 "name": "BaseBdev2", 00:14:37.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.166 "is_configured": false, 00:14:37.166 "data_offset": 0, 00:14:37.166 "data_size": 0 00:14:37.166 } 00:14:37.166 ] 00:14:37.166 }' 00:14:37.166 04:55:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.166 04:55:00 -- common/autotest_common.sh@10 -- # set +x 00:14:37.425 04:55:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:37.683 [2024-11-18 04:55:01.051058] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.683 [2024-11-18 04:55:01.051120] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:37.683 04:55:01 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:37.683 04:55:01 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:37.941 [2024-11-18 04:55:01.307183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.941 [2024-11-18 04:55:01.309457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:37.941 [2024-11-18 04:55:01.309523] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.941 04:55:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.199 04:55:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.199 "name": "Existed_Raid", 00:14:38.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.199 "strip_size_kb": 64, 00:14:38.199 "state": "configuring", 00:14:38.199 "raid_level": "concat", 00:14:38.199 "superblock": false, 00:14:38.199 "num_base_bdevs": 2, 00:14:38.199 "num_base_bdevs_discovered": 1, 00:14:38.199 "num_base_bdevs_operational": 2, 00:14:38.199 "base_bdevs_list": [ 00:14:38.199 { 00:14:38.199 "name": "BaseBdev1", 00:14:38.199 "uuid": "3b02b0e3-93ce-45bd-bb25-6e70b8e9c079", 00:14:38.199 "is_configured": true, 00:14:38.199 "data_offset": 0, 00:14:38.199 "data_size": 65536 00:14:38.199 }, 00:14:38.199 { 00:14:38.199 "name": "BaseBdev2", 00:14:38.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.199 "is_configured": false, 00:14:38.199 "data_offset": 0, 00:14:38.199 "data_size": 0 00:14:38.199 } 00:14:38.199 ] 00:14:38.199 }' 00:14:38.199 04:55:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.199 04:55:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.458 04:55:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.716 [2024-11-18 04:55:02.167194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.716 [2024-11-18 04:55:02.167288] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:38.716 [2024-11-18 04:55:02.167303] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:38.716 [2024-11-18 04:55:02.167443] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:38.716 [2024-11-18 04:55:02.167921] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:38.716 [2024-11-18 04:55:02.167956] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:14:38.716 [2024-11-18 04:55:02.168269] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.716 BaseBdev2 00:14:38.716 04:55:02 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:14:38.716 04:55:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:38.716 04:55:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:38.716 04:55:02 -- common/autotest_common.sh@899 -- # local i 00:14:38.716 04:55:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:38.716 04:55:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:38.716 04:55:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.974 04:55:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.233 [ 00:14:39.233 { 00:14:39.233 "name": "BaseBdev2", 00:14:39.233 "aliases": [ 00:14:39.233 "555bd6e6-846a-4d14-a7d6-b3509ab45278" 00:14:39.233 ], 00:14:39.233 "product_name": "Malloc disk", 00:14:39.233 "block_size": 512, 00:14:39.233 "num_blocks": 65536, 00:14:39.233 "uuid": "555bd6e6-846a-4d14-a7d6-b3509ab45278", 00:14:39.233 "assigned_rate_limits": { 00:14:39.233 "rw_ios_per_sec": 0, 00:14:39.233 "rw_mbytes_per_sec": 0, 00:14:39.233 "r_mbytes_per_sec": 0, 00:14:39.233 "w_mbytes_per_sec": 0 00:14:39.233 }, 00:14:39.233 "claimed": true, 00:14:39.233 "claim_type": "exclusive_write", 00:14:39.233 "zoned": false, 00:14:39.233 "supported_io_types": { 00:14:39.233 "read": true, 00:14:39.233 "write": true, 00:14:39.233 "unmap": true, 00:14:39.233 "write_zeroes": true, 00:14:39.233 "flush": true, 00:14:39.233 "reset": true, 00:14:39.233 "compare": false, 00:14:39.233 "compare_and_write": false, 00:14:39.233 "abort": true, 00:14:39.233 "nvme_admin": false, 00:14:39.233 "nvme_io": false 00:14:39.233 }, 00:14:39.233 "memory_domains": [ 00:14:39.233 { 00:14:39.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.233 "dma_device_type": 2 00:14:39.233 } 00:14:39.233 ], 00:14:39.233 "driver_specific": {} 00:14:39.233 } 00:14:39.233 ] 00:14:39.233 04:55:02 -- common/autotest_common.sh@905 -- # return 0 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:39.233 04:55:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.234 04:55:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.492 04:55:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.492 "name": "Existed_Raid", 00:14:39.492 "uuid": "0257ee49-b26c-443d-8b0a-85bd812cdb13", 00:14:39.492 "strip_size_kb": 64, 00:14:39.492 "state": "online", 00:14:39.492 "raid_level": "concat", 00:14:39.492 "superblock": false, 
00:14:39.492 "num_base_bdevs": 2, 00:14:39.492 "num_base_bdevs_discovered": 2, 00:14:39.492 "num_base_bdevs_operational": 2, 00:14:39.492 "base_bdevs_list": [ 00:14:39.492 { 00:14:39.492 "name": "BaseBdev1", 00:14:39.492 "uuid": "3b02b0e3-93ce-45bd-bb25-6e70b8e9c079", 00:14:39.492 "is_configured": true, 00:14:39.492 "data_offset": 0, 00:14:39.492 "data_size": 65536 00:14:39.492 }, 00:14:39.492 { 00:14:39.492 "name": "BaseBdev2", 00:14:39.492 "uuid": "555bd6e6-846a-4d14-a7d6-b3509ab45278", 00:14:39.492 "is_configured": true, 00:14:39.492 "data_offset": 0, 00:14:39.492 "data_size": 65536 00:14:39.492 } 00:14:39.492 ] 00:14:39.492 }' 00:14:39.492 04:55:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.492 04:55:02 -- common/autotest_common.sh@10 -- # set +x 00:14:39.751 04:55:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:40.010 [2024-11-18 04:55:03.483648] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.010 [2024-11-18 04:55:03.483702] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.010 [2024-11-18 04:55:03.483787] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.269 04:55:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.529 04:55:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.529 "name": "Existed_Raid", 00:14:40.529 "uuid": "0257ee49-b26c-443d-8b0a-85bd812cdb13", 00:14:40.529 "strip_size_kb": 64, 00:14:40.529 "state": "offline", 00:14:40.529 "raid_level": "concat", 00:14:40.529 "superblock": false, 00:14:40.529 "num_base_bdevs": 2, 00:14:40.529 "num_base_bdevs_discovered": 1, 00:14:40.529 "num_base_bdevs_operational": 1, 00:14:40.529 "base_bdevs_list": [ 00:14:40.529 { 00:14:40.529 "name": null, 00:14:40.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.529 "is_configured": false, 00:14:40.529 "data_offset": 0, 00:14:40.529 "data_size": 65536 00:14:40.529 }, 00:14:40.529 { 00:14:40.529 "name": "BaseBdev2", 00:14:40.529 "uuid": "555bd6e6-846a-4d14-a7d6-b3509ab45278", 00:14:40.529 "is_configured": true, 00:14:40.529 "data_offset": 0, 00:14:40.529 
"data_size": 65536 00:14:40.529 } 00:14:40.529 ] 00:14:40.529 }' 00:14:40.529 04:55:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.529 04:55:03 -- common/autotest_common.sh@10 -- # set +x 00:14:40.800 04:55:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:40.800 04:55:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:40.800 04:55:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:40.800 04:55:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.098 04:55:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:41.098 04:55:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:41.098 04:55:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:41.366 [2024-11-18 04:55:04.693097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.366 [2024-11-18 04:55:04.693183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:14:41.366 04:55:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:41.366 04:55:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:41.366 04:55:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.366 04:55:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:41.625 04:55:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:41.625 04:55:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:41.625 04:55:05 -- bdev/bdev_raid.sh@287 -- # killprocess 69696 00:14:41.625 04:55:05 -- common/autotest_common.sh@936 -- # '[' -z 69696 ']' 00:14:41.625 04:55:05 -- common/autotest_common.sh@940 -- # kill -0 69696 00:14:41.625 04:55:05 -- common/autotest_common.sh@941 -- # uname 00:14:41.625 04:55:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.625 04:55:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69696 00:14:41.625 killing process with pid 69696 00:14:41.625 04:55:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:41.625 04:55:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:41.625 04:55:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69696' 00:14:41.625 04:55:05 -- common/autotest_common.sh@955 -- # kill 69696 00:14:41.625 [2024-11-18 04:55:05.070388] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.625 04:55:05 -- common/autotest_common.sh@960 -- # wait 69696 00:14:41.625 [2024-11-18 04:55:05.070511] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.004 04:55:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:43.004 00:14:43.004 real 0m8.827s 00:14:43.004 user 0m14.421s 00:14:43.004 sys 0m1.338s 00:14:43.004 04:55:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:43.004 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:43.004 ************************************ 00:14:43.004 END TEST raid_state_function_test 00:14:43.004 ************************************ 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:43.005 04:55:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:43.005 04:55:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:43.005 04:55:06 -- common/autotest_common.sh@10 -- # set +x 
00:14:43.005 ************************************ 00:14:43.005 START TEST raid_state_function_test_sb 00:14:43.005 ************************************ 00:14:43.005 04:55:06 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=69990 00:14:43.005 Process raid pid: 69990 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69990' 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69990 /var/tmp/spdk-raid.sock 00:14:43.005 04:55:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.005 04:55:06 -- common/autotest_common.sh@829 -- # '[' -z 69990 ']' 00:14:43.005 04:55:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.005 04:55:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.005 04:55:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.005 04:55:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.005 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:43.005 [2024-11-18 04:55:06.262643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
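The _sb variant starting here differs from the previous run in exactly one flag: bdev_raid_create is invoked with -s, so a raid superblock is persisted onto each base bdev. The cost is visible directly in the JSON dumps that follow: with 512-byte blocks, each 65536-block malloc bdev exposes only 63488 data blocks at data_offset 2048, i.e. the first 1 MiB per base bdev is reserved for metadata; the payoff is that examine can re-assemble the array from disk, as the "raid superblock found on bdev pt1" traces earlier showed. Side by side, assuming the same rpc wrapper as before:

    # No superblock: the whole base bdev is data (data_offset 0, data_size 65536).
    rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # With -s: 2048 blocks per base bdev are reserved for the superblock
    # (data_offset 2048, data_size 63488).
    rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid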
00:14:43.005 [2024-11-18 04:55:06.262832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.005 [2024-11-18 04:55:06.440270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.264 [2024-11-18 04:55:06.668340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.522 [2024-11-18 04:55:06.837612] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.780 04:55:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.780 04:55:07 -- common/autotest_common.sh@862 -- # return 0 00:14:43.780 04:55:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.039 [2024-11-18 04:55:07.377808] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.039 [2024-11-18 04:55:07.377890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.039 [2024-11-18 04:55:07.377904] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.039 [2024-11-18 04:55:07.377918] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.039 04:55:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.040 04:55:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.298 04:55:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.298 "name": "Existed_Raid", 00:14:44.298 "uuid": "659af8c4-face-4ebe-8a16-55fd85481b03", 00:14:44.298 "strip_size_kb": 64, 00:14:44.298 "state": "configuring", 00:14:44.298 "raid_level": "concat", 00:14:44.298 "superblock": true, 00:14:44.298 "num_base_bdevs": 2, 00:14:44.298 "num_base_bdevs_discovered": 0, 00:14:44.298 "num_base_bdevs_operational": 2, 00:14:44.298 "base_bdevs_list": [ 00:14:44.298 { 00:14:44.298 "name": "BaseBdev1", 00:14:44.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.298 "is_configured": false, 00:14:44.298 "data_offset": 0, 00:14:44.298 "data_size": 0 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "name": "BaseBdev2", 00:14:44.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.298 "is_configured": false, 00:14:44.298 "data_offset": 0, 00:14:44.298 "data_size": 0 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 }' 00:14:44.298 04:55:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.298 04:55:07 -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.557 04:55:07 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.816 [2024-11-18 04:55:08.217845] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.816 [2024-11-18 04:55:08.217910] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:44.816 04:55:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.074 [2024-11-18 04:55:08.426009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.074 [2024-11-18 04:55:08.426079] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.074 [2024-11-18 04:55:08.426101] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.074 [2024-11-18 04:55:08.426117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.074 04:55:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.333 [2024-11-18 04:55:08.668931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.333 BaseBdev1 00:14:45.333 04:55:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:45.333 04:55:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:45.333 04:55:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.333 04:55:08 -- common/autotest_common.sh@899 -- # local i 00:14:45.333 04:55:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.333 04:55:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.333 04:55:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.591 04:55:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.591 [ 00:14:45.591 { 00:14:45.591 "name": "BaseBdev1", 00:14:45.591 "aliases": [ 00:14:45.591 "1639ee44-a99d-4019-8e62-44eb2fe76753" 00:14:45.591 ], 00:14:45.591 "product_name": "Malloc disk", 00:14:45.591 "block_size": 512, 00:14:45.591 "num_blocks": 65536, 00:14:45.591 "uuid": "1639ee44-a99d-4019-8e62-44eb2fe76753", 00:14:45.591 "assigned_rate_limits": { 00:14:45.591 "rw_ios_per_sec": 0, 00:14:45.591 "rw_mbytes_per_sec": 0, 00:14:45.591 "r_mbytes_per_sec": 0, 00:14:45.591 "w_mbytes_per_sec": 0 00:14:45.591 }, 00:14:45.591 "claimed": true, 00:14:45.591 "claim_type": "exclusive_write", 00:14:45.592 "zoned": false, 00:14:45.592 "supported_io_types": { 00:14:45.592 "read": true, 00:14:45.592 "write": true, 00:14:45.592 "unmap": true, 00:14:45.592 "write_zeroes": true, 00:14:45.592 "flush": true, 00:14:45.592 "reset": true, 00:14:45.592 "compare": false, 00:14:45.592 "compare_and_write": false, 00:14:45.592 "abort": true, 00:14:45.592 "nvme_admin": false, 00:14:45.592 "nvme_io": false 00:14:45.592 }, 00:14:45.592 "memory_domains": [ 00:14:45.592 { 00:14:45.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.592 "dma_device_type": 2 00:14:45.592 } 00:14:45.592 ], 00:14:45.592 "driver_specific": {} 00:14:45.592 } 00:14:45.592 ] 00:14:45.592 
04:55:09 -- common/autotest_common.sh@905 -- # return 0 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.592 04:55:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.851 04:55:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.851 04:55:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.851 04:55:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.851 "name": "Existed_Raid", 00:14:45.851 "uuid": "22497568-a7a0-4fac-bf96-a9633a908bd9", 00:14:45.851 "strip_size_kb": 64, 00:14:45.851 "state": "configuring", 00:14:45.851 "raid_level": "concat", 00:14:45.851 "superblock": true, 00:14:45.851 "num_base_bdevs": 2, 00:14:45.851 "num_base_bdevs_discovered": 1, 00:14:45.851 "num_base_bdevs_operational": 2, 00:14:45.851 "base_bdevs_list": [ 00:14:45.851 { 00:14:45.851 "name": "BaseBdev1", 00:14:45.851 "uuid": "1639ee44-a99d-4019-8e62-44eb2fe76753", 00:14:45.851 "is_configured": true, 00:14:45.851 "data_offset": 2048, 00:14:45.851 "data_size": 63488 00:14:45.851 }, 00:14:45.851 { 00:14:45.851 "name": "BaseBdev2", 00:14:45.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.851 "is_configured": false, 00:14:45.851 "data_offset": 0, 00:14:45.851 "data_size": 0 00:14:45.851 } 00:14:45.851 ] 00:14:45.851 }' 00:14:45.851 04:55:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.851 04:55:09 -- common/autotest_common.sh@10 -- # set +x 00:14:46.418 04:55:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:46.418 [2024-11-18 04:55:09.853299] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.418 [2024-11-18 04:55:09.853377] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:46.418 04:55:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:46.418 04:55:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:46.984 04:55:10 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.984 BaseBdev1 00:14:46.984 04:55:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:46.984 04:55:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:46.984 04:55:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.984 04:55:10 -- common/autotest_common.sh@899 -- # local i 00:14:46.984 04:55:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.984 04:55:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.984 04:55:10 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.242 04:55:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.501 [ 00:14:47.501 { 00:14:47.501 "name": "BaseBdev1", 00:14:47.501 "aliases": [ 00:14:47.501 "610ee965-fceb-4b86-a70d-fc645c783378" 00:14:47.501 ], 00:14:47.501 "product_name": "Malloc disk", 00:14:47.501 "block_size": 512, 00:14:47.501 "num_blocks": 65536, 00:14:47.501 "uuid": "610ee965-fceb-4b86-a70d-fc645c783378", 00:14:47.501 "assigned_rate_limits": { 00:14:47.501 "rw_ios_per_sec": 0, 00:14:47.501 "rw_mbytes_per_sec": 0, 00:14:47.501 "r_mbytes_per_sec": 0, 00:14:47.501 "w_mbytes_per_sec": 0 00:14:47.501 }, 00:14:47.501 "claimed": false, 00:14:47.501 "zoned": false, 00:14:47.501 "supported_io_types": { 00:14:47.501 "read": true, 00:14:47.501 "write": true, 00:14:47.501 "unmap": true, 00:14:47.501 "write_zeroes": true, 00:14:47.501 "flush": true, 00:14:47.501 "reset": true, 00:14:47.501 "compare": false, 00:14:47.501 "compare_and_write": false, 00:14:47.501 "abort": true, 00:14:47.501 "nvme_admin": false, 00:14:47.501 "nvme_io": false 00:14:47.501 }, 00:14:47.501 "memory_domains": [ 00:14:47.501 { 00:14:47.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.501 "dma_device_type": 2 00:14:47.501 } 00:14:47.501 ], 00:14:47.501 "driver_specific": {} 00:14:47.501 } 00:14:47.501 ] 00:14:47.501 04:55:10 -- common/autotest_common.sh@905 -- # return 0 00:14:47.501 04:55:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:47.760 [2024-11-18 04:55:11.124777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.760 [2024-11-18 04:55:11.126905] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.760 [2024-11-18 04:55:11.126973] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.760 04:55:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.019 04:55:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.019 "name": "Existed_Raid", 00:14:48.019 "uuid": "5c5a1d59-b001-4699-b0b6-5ab7761e0c80", 00:14:48.019 "strip_size_kb": 64, 00:14:48.019 "state": 
"configuring", 00:14:48.019 "raid_level": "concat", 00:14:48.019 "superblock": true, 00:14:48.019 "num_base_bdevs": 2, 00:14:48.019 "num_base_bdevs_discovered": 1, 00:14:48.019 "num_base_bdevs_operational": 2, 00:14:48.019 "base_bdevs_list": [ 00:14:48.019 { 00:14:48.019 "name": "BaseBdev1", 00:14:48.019 "uuid": "610ee965-fceb-4b86-a70d-fc645c783378", 00:14:48.019 "is_configured": true, 00:14:48.019 "data_offset": 2048, 00:14:48.019 "data_size": 63488 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": "BaseBdev2", 00:14:48.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.019 "is_configured": false, 00:14:48.019 "data_offset": 0, 00:14:48.019 "data_size": 0 00:14:48.019 } 00:14:48.019 ] 00:14:48.019 }' 00:14:48.019 04:55:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.019 04:55:11 -- common/autotest_common.sh@10 -- # set +x 00:14:48.278 04:55:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.537 [2024-11-18 04:55:11.899027] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.537 [2024-11-18 04:55:11.899346] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:14:48.537 [2024-11-18 04:55:11.899365] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:48.537 [2024-11-18 04:55:11.899514] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:48.537 [2024-11-18 04:55:11.899877] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:14:48.537 [2024-11-18 04:55:11.899913] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:14:48.537 [2024-11-18 04:55:11.900076] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.537 BaseBdev2 00:14:48.537 04:55:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:48.537 04:55:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:48.537 04:55:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.537 04:55:11 -- common/autotest_common.sh@899 -- # local i 00:14:48.537 04:55:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.537 04:55:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.537 04:55:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.797 04:55:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.056 [ 00:14:49.056 { 00:14:49.056 "name": "BaseBdev2", 00:14:49.056 "aliases": [ 00:14:49.056 "b176eaca-2746-4c5f-85f2-92bbabbb44fe" 00:14:49.056 ], 00:14:49.056 "product_name": "Malloc disk", 00:14:49.056 "block_size": 512, 00:14:49.056 "num_blocks": 65536, 00:14:49.056 "uuid": "b176eaca-2746-4c5f-85f2-92bbabbb44fe", 00:14:49.056 "assigned_rate_limits": { 00:14:49.056 "rw_ios_per_sec": 0, 00:14:49.056 "rw_mbytes_per_sec": 0, 00:14:49.056 "r_mbytes_per_sec": 0, 00:14:49.056 "w_mbytes_per_sec": 0 00:14:49.056 }, 00:14:49.056 "claimed": true, 00:14:49.056 "claim_type": "exclusive_write", 00:14:49.056 "zoned": false, 00:14:49.056 "supported_io_types": { 00:14:49.056 "read": true, 00:14:49.056 "write": true, 00:14:49.056 "unmap": true, 00:14:49.056 "write_zeroes": true, 00:14:49.056 "flush": true, 00:14:49.056 
"reset": true, 00:14:49.056 "compare": false, 00:14:49.056 "compare_and_write": false, 00:14:49.056 "abort": true, 00:14:49.056 "nvme_admin": false, 00:14:49.056 "nvme_io": false 00:14:49.056 }, 00:14:49.057 "memory_domains": [ 00:14:49.057 { 00:14:49.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.057 "dma_device_type": 2 00:14:49.057 } 00:14:49.057 ], 00:14:49.057 "driver_specific": {} 00:14:49.057 } 00:14:49.057 ] 00:14:49.057 04:55:12 -- common/autotest_common.sh@905 -- # return 0 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.057 "name": "Existed_Raid", 00:14:49.057 "uuid": "5c5a1d59-b001-4699-b0b6-5ab7761e0c80", 00:14:49.057 "strip_size_kb": 64, 00:14:49.057 "state": "online", 00:14:49.057 "raid_level": "concat", 00:14:49.057 "superblock": true, 00:14:49.057 "num_base_bdevs": 2, 00:14:49.057 "num_base_bdevs_discovered": 2, 00:14:49.057 "num_base_bdevs_operational": 2, 00:14:49.057 "base_bdevs_list": [ 00:14:49.057 { 00:14:49.057 "name": "BaseBdev1", 00:14:49.057 "uuid": "610ee965-fceb-4b86-a70d-fc645c783378", 00:14:49.057 "is_configured": true, 00:14:49.057 "data_offset": 2048, 00:14:49.057 "data_size": 63488 00:14:49.057 }, 00:14:49.057 { 00:14:49.057 "name": "BaseBdev2", 00:14:49.057 "uuid": "b176eaca-2746-4c5f-85f2-92bbabbb44fe", 00:14:49.057 "is_configured": true, 00:14:49.057 "data_offset": 2048, 00:14:49.057 "data_size": 63488 00:14:49.057 } 00:14:49.057 ] 00:14:49.057 }' 00:14:49.057 04:55:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.057 04:55:12 -- common/autotest_common.sh@10 -- # set +x 00:14:49.626 04:55:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:49.626 [2024-11-18 04:55:13.059601] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.626 [2024-11-18 04:55:13.059643] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.626 [2024-11-18 04:55:13.059724] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:49.886 
04:55:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.886 "name": "Existed_Raid", 00:14:49.886 "uuid": "5c5a1d59-b001-4699-b0b6-5ab7761e0c80", 00:14:49.886 "strip_size_kb": 64, 00:14:49.886 "state": "offline", 00:14:49.886 "raid_level": "concat", 00:14:49.886 "superblock": true, 00:14:49.886 "num_base_bdevs": 2, 00:14:49.886 "num_base_bdevs_discovered": 1, 00:14:49.886 "num_base_bdevs_operational": 1, 00:14:49.886 "base_bdevs_list": [ 00:14:49.886 { 00:14:49.886 "name": null, 00:14:49.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.886 "is_configured": false, 00:14:49.886 "data_offset": 2048, 00:14:49.886 "data_size": 63488 00:14:49.886 }, 00:14:49.886 { 00:14:49.886 "name": "BaseBdev2", 00:14:49.886 "uuid": "b176eaca-2746-4c5f-85f2-92bbabbb44fe", 00:14:49.886 "is_configured": true, 00:14:49.886 "data_offset": 2048, 00:14:49.886 "data_size": 63488 00:14:49.886 } 00:14:49.886 ] 00:14:49.886 }' 00:14:49.886 04:55:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.886 04:55:13 -- common/autotest_common.sh@10 -- # set +x 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.453 04:55:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:50.712 [2024-11-18 04:55:14.104063] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.712 [2024-11-18 04:55:14.104147] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:14:50.712 04:55:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:50.712 04:55:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:50.712 04:55:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.712 04:55:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.971 04:55:14 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:50.971 04:55:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:50.971 04:55:14 -- bdev/bdev_raid.sh@287 -- # killprocess 69990 00:14:50.971 04:55:14 -- common/autotest_common.sh@936 -- # '[' -z 69990 ']' 00:14:50.971 04:55:14 -- common/autotest_common.sh@940 -- # kill -0 69990 00:14:50.971 04:55:14 -- common/autotest_common.sh@941 -- # uname 00:14:50.971 04:55:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.971 04:55:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69990 00:14:50.971 04:55:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.971 04:55:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.971 killing process with pid 69990 00:14:50.971 04:55:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69990' 00:14:50.971 04:55:14 -- common/autotest_common.sh@955 -- # kill 69990 00:14:50.971 [2024-11-18 04:55:14.483523] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.971 04:55:14 -- common/autotest_common.sh@960 -- # wait 69990 00:14:50.971 [2024-11-18 04:55:14.483649] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:52.351 ************************************ 00:14:52.351 END TEST raid_state_function_test_sb 00:14:52.351 00:14:52.351 real 0m9.371s 00:14:52.351 user 0m15.417s 00:14:52.351 sys 0m1.313s 00:14:52.351 04:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.351 04:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 ************************************ 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:52.351 04:55:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:52.351 04:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.351 04:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 ************************************ 00:14:52.351 START TEST raid_superblock_test 00:14:52.351 ************************************ 00:14:52.351 04:55:15 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=70286 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 70286 
/var/tmp/spdk-raid.sock 00:14:52.351 04:55:15 -- common/autotest_common.sh@829 -- # '[' -z 70286 ']' 00:14:52.351 04:55:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.351 04:55:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:52.351 04:55:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.351 04:55:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.351 04:55:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.351 04:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 [2024-11-18 04:55:15.682377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:52.351 [2024-11-18 04:55:15.682538] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70286 ] 00:14:52.351 [2024-11-18 04:55:15.848639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.610 [2024-11-18 04:55:16.020170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.869 [2024-11-18 04:55:16.197762] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.128 04:55:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.128 04:55:16 -- common/autotest_common.sh@862 -- # return 0 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.128 04:55:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:53.387 malloc1 00:14:53.387 04:55:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.646 [2024-11-18 04:55:17.036086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.646 [2024-11-18 04:55:17.036194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.646 [2024-11-18 04:55:17.036252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:53.646 [2024-11-18 04:55:17.036268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.646 [2024-11-18 04:55:17.038743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.646 [2024-11-18 04:55:17.038791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.646 pt1 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
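The loop the xtrace above is stepping through builds each base device in two rpc.py calls: a 32 MiB malloc bdev (65536 blocks of 512 bytes), then a passthru wrapper tagged with a fixed UUID. A minimal standalone sketch, using only the calls visible in the trace (the rpc shorthand variable is introduced here for brevity):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2; do
        # 32 MiB backing store: 65536 blocks x 512 bytes
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # passthru wrapper with a deterministic UUID, as in the trace
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done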
00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.646 04:55:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:53.906 malloc2 00:14:53.906 04:55:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.165 [2024-11-18 04:55:17.543417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.165 [2024-11-18 04:55:17.543525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.165 [2024-11-18 04:55:17.543561] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:54.165 [2024-11-18 04:55:17.543574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.165 [2024-11-18 04:55:17.546124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.165 [2024-11-18 04:55:17.546181] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.165 pt2 00:14:54.165 04:55:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:54.165 04:55:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:54.166 04:55:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:54.425 [2024-11-18 04:55:17.755492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.425 [2024-11-18 04:55:17.757541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.425 [2024-11-18 04:55:17.757759] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:14:54.425 [2024-11-18 04:55:17.757776] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:54.425 [2024-11-18 04:55:17.757963] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:54.425 [2024-11-18 04:55:17.758406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:14:54.425 [2024-11-18 04:55:17.758456] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:14:54.425 [2024-11-18 04:55:17.758647] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
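With pt1 and pt2 registered, the single call below (taken verbatim from the trace) assembles them into the array; flag meanings follow rpc.py's bdev_raid_create conventions: -z strip size in KiB, -r RAID level, -b space-separated base bdevs, -n array name, -s write a superblock onto each base bdev. The superblock reservation is also why the JSON dumps report data_offset 2048 and data_size 63488 rather than the full 65536 blocks per base device:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # concat array over pt1+pt2, 64 KiB strip, with on-disk superblock
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s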
00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.425 04:55:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.684 04:55:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.684 "name": "raid_bdev1", 00:14:54.684 "uuid": "c0348a02-407d-4438-80fc-36f8c9065935", 00:14:54.684 "strip_size_kb": 64, 00:14:54.684 "state": "online", 00:14:54.684 "raid_level": "concat", 00:14:54.684 "superblock": true, 00:14:54.684 "num_base_bdevs": 2, 00:14:54.684 "num_base_bdevs_discovered": 2, 00:14:54.684 "num_base_bdevs_operational": 2, 00:14:54.684 "base_bdevs_list": [ 00:14:54.684 { 00:14:54.684 "name": "pt1", 00:14:54.684 "uuid": "a5ae3523-e0de-5bf9-ab1a-736f380b9a84", 00:14:54.684 "is_configured": true, 00:14:54.684 "data_offset": 2048, 00:14:54.684 "data_size": 63488 00:14:54.684 }, 00:14:54.684 { 00:14:54.684 "name": "pt2", 00:14:54.684 "uuid": "cdae25d8-9b98-58d8-be6e-b290a3c68b0b", 00:14:54.684 "is_configured": true, 00:14:54.684 "data_offset": 2048, 00:14:54.684 "data_size": 63488 00:14:54.684 } 00:14:54.684 ] 00:14:54.684 }' 00:14:54.684 04:55:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.684 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:14:54.943 04:55:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:54.943 04:55:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:55.202 [2024-11-18 04:55:18.507914] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.202 04:55:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c0348a02-407d-4438-80fc-36f8c9065935 00:14:55.202 04:55:18 -- bdev/bdev_raid.sh@380 -- # '[' -z c0348a02-407d-4438-80fc-36f8c9065935 ']' 00:14:55.202 04:55:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:55.460 [2024-11-18 04:55:18.767738] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.460 [2024-11-18 04:55:18.767820] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.460 [2024-11-18 04:55:18.767910] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.460 [2024-11-18 04:55:18.767980] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.460 [2024-11-18 04:55:18.767995] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:14:55.460 04:55:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.460 04:55:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:55.719 04:55:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:55.719 04:55:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:55.719 04:55:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.719 04:55:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
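verify_raid_bdev_state, which the trace returns to after every mutation, is essentially fetch-then-compare: pull the named array's JSON out of bdev_raid_get_bdevs and check each field against the expected value. A simplified rendition; the rpc.py call and the jq select filter are verbatim from the trace, while the per-field extraction below is an assumed stand-in for the helper's internals:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Fetch the array's JSON, exactly as the helper does
    raid_bdev_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Illustrative field checks (expected values from the online case above)
    [ "$(jq -r '.state' <<<"$raid_bdev_info")" = online ] || exit 1
    [ "$(jq -r '.raid_level' <<<"$raid_bdev_info")" = concat ] || exit 1
    [ "$(jq -r '.strip_size_kb' <<<"$raid_bdev_info")" -eq 64 ] || exit 1
    [ "$(jq -r '.num_base_bdevs_operational' <<<"$raid_bdev_info")" -eq 2 ] || exit 1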
00:14:55.979 04:55:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.979 04:55:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:55.979 04:55:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:55.979 04:55:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:56.239 04:55:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:56.239 04:55:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:56.239 04:55:19 -- common/autotest_common.sh@650 -- # local es=0 00:14:56.239 04:55:19 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:56.239 04:55:19 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.239 04:55:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.239 04:55:19 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.239 04:55:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.239 04:55:19 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.239 04:55:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.239 04:55:19 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.239 04:55:19 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:56.239 04:55:19 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:56.498 [2024-11-18 04:55:19.923984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:56.498 [2024-11-18 04:55:19.926219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:56.498 [2024-11-18 04:55:19.926334] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:56.498 [2024-11-18 04:55:19.926417] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:56.498 [2024-11-18 04:55:19.926446] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.498 [2024-11-18 04:55:19.926457] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:14:56.498 request: 00:14:56.498 { 00:14:56.498 "name": "raid_bdev1", 00:14:56.498 "raid_level": "concat", 00:14:56.498 "base_bdevs": [ 00:14:56.498 "malloc1", 00:14:56.498 "malloc2" 00:14:56.498 ], 00:14:56.498 "superblock": false, 00:14:56.498 "strip_size_kb": 64, 00:14:56.498 "method": "bdev_raid_create", 00:14:56.498 "req_id": 1 00:14:56.498 } 00:14:56.498 Got JSON-RPC error response 00:14:56.498 response: 00:14:56.498 { 00:14:56.498 "code": -17, 00:14:56.498 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:56.498 } 00:14:56.498 04:55:19 -- common/autotest_common.sh@653 -- # es=1 00:14:56.498 04:55:19 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.498 04:55:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.498 04:55:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.498 04:55:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.498 04:55:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:56.757 04:55:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:56.757 04:55:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:56.757 04:55:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.017 [2024-11-18 04:55:20.348049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.017 [2024-11-18 04:55:20.348155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.017 [2024-11-18 04:55:20.348189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:14:57.017 [2024-11-18 04:55:20.348231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.017 [2024-11-18 04:55:20.350809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.017 [2024-11-18 04:55:20.350857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.017 [2024-11-18 04:55:20.350970] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:57.017 [2024-11-18 04:55:20.351042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.017 pt1 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.017 04:55:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.276 04:55:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.276 "name": "raid_bdev1", 00:14:57.276 "uuid": "c0348a02-407d-4438-80fc-36f8c9065935", 00:14:57.276 "strip_size_kb": 64, 00:14:57.276 "state": "configuring", 00:14:57.276 "raid_level": "concat", 00:14:57.276 "superblock": true, 00:14:57.276 "num_base_bdevs": 2, 00:14:57.276 "num_base_bdevs_discovered": 1, 00:14:57.276 "num_base_bdevs_operational": 2, 00:14:57.276 "base_bdevs_list": [ 00:14:57.276 { 00:14:57.276 "name": "pt1", 00:14:57.276 "uuid": "a5ae3523-e0de-5bf9-ab1a-736f380b9a84", 00:14:57.276 "is_configured": true, 00:14:57.276 "data_offset": 2048, 00:14:57.276 "data_size": 63488 00:14:57.276 }, 00:14:57.276 { 00:14:57.276 "name": null, 00:14:57.276 "uuid": 
"cdae25d8-9b98-58d8-be6e-b290a3c68b0b", 00:14:57.276 "is_configured": false, 00:14:57.276 "data_offset": 2048, 00:14:57.276 "data_size": 63488 00:14:57.276 } 00:14:57.276 ] 00:14:57.276 }' 00:14:57.276 04:55:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.276 04:55:20 -- common/autotest_common.sh@10 -- # set +x 00:14:57.533 04:55:20 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:57.533 04:55:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:57.533 04:55:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:57.533 04:55:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.791 [2024-11-18 04:55:21.144213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.791 [2024-11-18 04:55:21.144366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.791 [2024-11-18 04:55:21.144441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:14:57.791 [2024-11-18 04:55:21.144458] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.791 [2024-11-18 04:55:21.144972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.791 [2024-11-18 04:55:21.145009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.791 [2024-11-18 04:55:21.145131] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:57.791 [2024-11-18 04:55:21.145190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.791 [2024-11-18 04:55:21.145372] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:14:57.791 [2024-11-18 04:55:21.145404] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.791 [2024-11-18 04:55:21.145550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:57.791 [2024-11-18 04:55:21.145915] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:14:57.791 [2024-11-18 04:55:21.145960] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:14:57.791 [2024-11-18 04:55:21.146145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.791 pt2 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.791 04:55:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.791 04:55:21 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.049 04:55:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.049 "name": "raid_bdev1", 00:14:58.049 "uuid": "c0348a02-407d-4438-80fc-36f8c9065935", 00:14:58.049 "strip_size_kb": 64, 00:14:58.049 "state": "online", 00:14:58.049 "raid_level": "concat", 00:14:58.049 "superblock": true, 00:14:58.049 "num_base_bdevs": 2, 00:14:58.049 "num_base_bdevs_discovered": 2, 00:14:58.049 "num_base_bdevs_operational": 2, 00:14:58.049 "base_bdevs_list": [ 00:14:58.049 { 00:14:58.050 "name": "pt1", 00:14:58.050 "uuid": "a5ae3523-e0de-5bf9-ab1a-736f380b9a84", 00:14:58.050 "is_configured": true, 00:14:58.050 "data_offset": 2048, 00:14:58.050 "data_size": 63488 00:14:58.050 }, 00:14:58.050 { 00:14:58.050 "name": "pt2", 00:14:58.050 "uuid": "cdae25d8-9b98-58d8-be6e-b290a3c68b0b", 00:14:58.050 "is_configured": true, 00:14:58.050 "data_offset": 2048, 00:14:58.050 "data_size": 63488 00:14:58.050 } 00:14:58.050 ] 00:14:58.050 }' 00:14:58.050 04:55:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.050 04:55:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.307 04:55:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:58.307 04:55:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:58.565 [2024-11-18 04:55:21.884625] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.565 04:55:21 -- bdev/bdev_raid.sh@430 -- # '[' c0348a02-407d-4438-80fc-36f8c9065935 '!=' c0348a02-407d-4438-80fc-36f8c9065935 ']' 00:14:58.565 04:55:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:58.565 04:55:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:58.565 04:55:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:58.565 04:55:21 -- bdev/bdev_raid.sh@511 -- # killprocess 70286 00:14:58.565 04:55:21 -- common/autotest_common.sh@936 -- # '[' -z 70286 ']' 00:14:58.565 04:55:21 -- common/autotest_common.sh@940 -- # kill -0 70286 00:14:58.565 04:55:21 -- common/autotest_common.sh@941 -- # uname 00:14:58.565 04:55:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.565 04:55:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70286 00:14:58.565 04:55:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:58.565 04:55:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:58.565 killing process with pid 70286 00:14:58.565 04:55:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70286' 00:14:58.565 04:55:21 -- common/autotest_common.sh@955 -- # kill 70286 00:14:58.565 [2024-11-18 04:55:21.933239] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.565 [2024-11-18 04:55:21.933336] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.565 04:55:21 -- common/autotest_common.sh@960 -- # wait 70286 00:14:58.565 [2024-11-18 04:55:21.933395] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.565 [2024-11-18 04:55:21.933417] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:14:58.565 [2024-11-18 04:55:22.087401] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:59.939 00:14:59.939 real 0m7.524s 00:14:59.939 user 0m12.062s 
00:14:59.939 sys 0m1.058s 00:14:59.939 04:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:59.939 04:55:23 -- common/autotest_common.sh@10 -- # set +x 00:14:59.939 ************************************ 00:14:59.939 END TEST raid_superblock_test 00:14:59.939 ************************************ 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:59.939 04:55:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:59.939 04:55:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.939 04:55:23 -- common/autotest_common.sh@10 -- # set +x 00:14:59.939 ************************************ 00:14:59.939 START TEST raid_state_function_test 00:14:59.939 ************************************ 00:14:59.939 04:55:23 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=70509 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:59.939 Process raid pid: 70509 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70509' 00:14:59.939 04:55:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70509 /var/tmp/spdk-raid.sock 00:14:59.939 04:55:23 -- common/autotest_common.sh@829 -- # '[' -z 70509 ']' 00:14:59.939 04:55:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.939 04:55:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
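Every test in this file runs inside the same scaffolding: start the stub bdev_svc application on a private RPC socket, wait for it to listen, drive the test over RPC, then kill and reap the process. A condensed sketch; waitforlisten and killprocess live in autotest_common.sh and their internals are not shown in the trace, so the polling loop here is a stand-in:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Stub app on the test's private socket, with raid debug logging enabled
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Stand-in for waitforlisten: poll until the socket answers RPCs
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # ... RPC-driven test body ...
    kill "$raid_pid" && wait "$raid_pid"   # roughly what killprocess does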
00:14:59.940 04:55:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.940 04:55:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.940 04:55:23 -- common/autotest_common.sh@10 -- # set +x 00:14:59.940 [2024-11-18 04:55:23.263488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.940 [2024-11-18 04:55:23.263683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.940 [2024-11-18 04:55:23.435002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.198 [2024-11-18 04:55:23.613221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.457 [2024-11-18 04:55:23.780556] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.716 04:55:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.716 04:55:24 -- common/autotest_common.sh@862 -- # return 0 00:15:00.716 04:55:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.974 [2024-11-18 04:55:24.434133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.974 [2024-11-18 04:55:24.434265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.974 [2024-11-18 04:55:24.434289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.974 [2024-11-18 04:55:24.434308] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:00.974 04:55:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.975 04:55:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.975 04:55:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.975 04:55:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.975 04:55:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.975 04:55:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.234 04:55:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.234 "name": "Existed_Raid", 00:15:01.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.234 "strip_size_kb": 0, 00:15:01.234 "state": "configuring", 00:15:01.234 "raid_level": "raid1", 00:15:01.234 "superblock": false, 00:15:01.234 "num_base_bdevs": 2, 00:15:01.234 "num_base_bdevs_discovered": 0, 00:15:01.234 "num_base_bdevs_operational": 2, 00:15:01.234 "base_bdevs_list": [ 00:15:01.234 { 00:15:01.234 "name": "BaseBdev1", 00:15:01.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.234 "is_configured": false, 00:15:01.234 
"data_offset": 0, 00:15:01.234 "data_size": 0 00:15:01.234 }, 00:15:01.234 { 00:15:01.234 "name": "BaseBdev2", 00:15:01.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.234 "is_configured": false, 00:15:01.234 "data_offset": 0, 00:15:01.234 "data_size": 0 00:15:01.234 } 00:15:01.234 ] 00:15:01.234 }' 00:15:01.234 04:55:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.234 04:55:24 -- common/autotest_common.sh@10 -- # set +x 00:15:01.493 04:55:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.752 [2024-11-18 04:55:25.190235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.752 [2024-11-18 04:55:25.190319] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:01.752 04:55:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:02.011 [2024-11-18 04:55:25.402346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.011 [2024-11-18 04:55:25.402439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.011 [2024-11-18 04:55:25.402476] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.011 [2024-11-18 04:55:25.402494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.011 04:55:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.271 [2024-11-18 04:55:25.680971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.271 BaseBdev1 00:15:02.271 04:55:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:02.271 04:55:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:02.271 04:55:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.271 04:55:25 -- common/autotest_common.sh@899 -- # local i 00:15:02.271 04:55:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.271 04:55:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.271 04:55:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.529 04:55:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.788 [ 00:15:02.788 { 00:15:02.788 "name": "BaseBdev1", 00:15:02.788 "aliases": [ 00:15:02.788 "602b9eb2-f469-492f-ae16-1325fa6e59d6" 00:15:02.788 ], 00:15:02.789 "product_name": "Malloc disk", 00:15:02.789 "block_size": 512, 00:15:02.789 "num_blocks": 65536, 00:15:02.789 "uuid": "602b9eb2-f469-492f-ae16-1325fa6e59d6", 00:15:02.789 "assigned_rate_limits": { 00:15:02.789 "rw_ios_per_sec": 0, 00:15:02.789 "rw_mbytes_per_sec": 0, 00:15:02.789 "r_mbytes_per_sec": 0, 00:15:02.789 "w_mbytes_per_sec": 0 00:15:02.789 }, 00:15:02.789 "claimed": true, 00:15:02.789 "claim_type": "exclusive_write", 00:15:02.789 "zoned": false, 00:15:02.789 "supported_io_types": { 00:15:02.789 "read": true, 00:15:02.789 "write": true, 00:15:02.789 "unmap": true, 00:15:02.789 "write_zeroes": true, 00:15:02.789 "flush": true, 00:15:02.789 "reset": true, 00:15:02.789 "compare": false, 
00:15:02.789 "compare_and_write": false, 00:15:02.789 "abort": true, 00:15:02.789 "nvme_admin": false, 00:15:02.789 "nvme_io": false 00:15:02.789 }, 00:15:02.789 "memory_domains": [ 00:15:02.789 { 00:15:02.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.789 "dma_device_type": 2 00:15:02.789 } 00:15:02.789 ], 00:15:02.789 "driver_specific": {} 00:15:02.789 } 00:15:02.789 ] 00:15:02.789 04:55:26 -- common/autotest_common.sh@905 -- # return 0 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.789 04:55:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.048 04:55:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.048 "name": "Existed_Raid", 00:15:03.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.048 "strip_size_kb": 0, 00:15:03.048 "state": "configuring", 00:15:03.048 "raid_level": "raid1", 00:15:03.048 "superblock": false, 00:15:03.048 "num_base_bdevs": 2, 00:15:03.048 "num_base_bdevs_discovered": 1, 00:15:03.048 "num_base_bdevs_operational": 2, 00:15:03.048 "base_bdevs_list": [ 00:15:03.048 { 00:15:03.048 "name": "BaseBdev1", 00:15:03.048 "uuid": "602b9eb2-f469-492f-ae16-1325fa6e59d6", 00:15:03.048 "is_configured": true, 00:15:03.048 "data_offset": 0, 00:15:03.048 "data_size": 65536 00:15:03.048 }, 00:15:03.048 { 00:15:03.048 "name": "BaseBdev2", 00:15:03.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.048 "is_configured": false, 00:15:03.048 "data_offset": 0, 00:15:03.048 "data_size": 0 00:15:03.048 } 00:15:03.048 ] 00:15:03.048 }' 00:15:03.048 04:55:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.048 04:55:26 -- common/autotest_common.sh@10 -- # set +x 00:15:03.307 04:55:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:03.566 [2024-11-18 04:55:26.845329] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.566 [2024-11-18 04:55:26.845408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:03.566 04:55:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:03.566 04:55:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:03.566 [2024-11-18 04:55:27.057489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.566 [2024-11-18 04:55:27.059660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.566 [2024-11-18 
04:55:27.059733] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.566 04:55:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.825 04:55:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.825 "name": "Existed_Raid", 00:15:03.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.825 "strip_size_kb": 0, 00:15:03.825 "state": "configuring", 00:15:03.825 "raid_level": "raid1", 00:15:03.825 "superblock": false, 00:15:03.825 "num_base_bdevs": 2, 00:15:03.825 "num_base_bdevs_discovered": 1, 00:15:03.825 "num_base_bdevs_operational": 2, 00:15:03.825 "base_bdevs_list": [ 00:15:03.825 { 00:15:03.825 "name": "BaseBdev1", 00:15:03.825 "uuid": "602b9eb2-f469-492f-ae16-1325fa6e59d6", 00:15:03.825 "is_configured": true, 00:15:03.825 "data_offset": 0, 00:15:03.825 "data_size": 65536 00:15:03.825 }, 00:15:03.825 { 00:15:03.825 "name": "BaseBdev2", 00:15:03.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.825 "is_configured": false, 00:15:03.825 "data_offset": 0, 00:15:03.825 "data_size": 0 00:15:03.825 } 00:15:03.825 ] 00:15:03.825 }' 00:15:03.825 04:55:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.825 04:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:04.083 04:55:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.342 [2024-11-18 04:55:27.833687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.342 [2024-11-18 04:55:27.833766] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:04.343 [2024-11-18 04:55:27.833779] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:04.343 [2024-11-18 04:55:27.833891] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:04.343 [2024-11-18 04:55:27.834339] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:04.343 [2024-11-18 04:55:27.834372] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:04.343 [2024-11-18 04:55:27.834665] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.343 BaseBdev2 00:15:04.343 04:55:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:04.343 
04:55:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:04.343 04:55:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.343 04:55:27 -- common/autotest_common.sh@899 -- # local i 00:15:04.343 04:55:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.343 04:55:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.343 04:55:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.602 04:55:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.862 [ 00:15:04.862 { 00:15:04.862 "name": "BaseBdev2", 00:15:04.862 "aliases": [ 00:15:04.862 "235f4ea4-7242-42fb-88a8-4335d5e0a53f" 00:15:04.862 ], 00:15:04.862 "product_name": "Malloc disk", 00:15:04.862 "block_size": 512, 00:15:04.862 "num_blocks": 65536, 00:15:04.862 "uuid": "235f4ea4-7242-42fb-88a8-4335d5e0a53f", 00:15:04.862 "assigned_rate_limits": { 00:15:04.862 "rw_ios_per_sec": 0, 00:15:04.862 "rw_mbytes_per_sec": 0, 00:15:04.862 "r_mbytes_per_sec": 0, 00:15:04.862 "w_mbytes_per_sec": 0 00:15:04.862 }, 00:15:04.862 "claimed": true, 00:15:04.862 "claim_type": "exclusive_write", 00:15:04.862 "zoned": false, 00:15:04.862 "supported_io_types": { 00:15:04.862 "read": true, 00:15:04.862 "write": true, 00:15:04.862 "unmap": true, 00:15:04.862 "write_zeroes": true, 00:15:04.862 "flush": true, 00:15:04.862 "reset": true, 00:15:04.862 "compare": false, 00:15:04.862 "compare_and_write": false, 00:15:04.862 "abort": true, 00:15:04.862 "nvme_admin": false, 00:15:04.862 "nvme_io": false 00:15:04.862 }, 00:15:04.862 "memory_domains": [ 00:15:04.862 { 00:15:04.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.862 "dma_device_type": 2 00:15:04.862 } 00:15:04.862 ], 00:15:04.862 "driver_specific": {} 00:15:04.862 } 00:15:04.862 ] 00:15:04.862 04:55:28 -- common/autotest_common.sh@905 -- # return 0 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.862 04:55:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.122 04:55:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.122 "name": "Existed_Raid", 00:15:05.122 "uuid": "63884729-8334-4d6e-ae59-de26bee9c087", 00:15:05.122 "strip_size_kb": 0, 00:15:05.122 "state": "online", 00:15:05.122 "raid_level": "raid1", 00:15:05.122 "superblock": false, 00:15:05.122 "num_base_bdevs": 2, 00:15:05.122 
"num_base_bdevs_discovered": 2, 00:15:05.122 "num_base_bdevs_operational": 2, 00:15:05.122 "base_bdevs_list": [ 00:15:05.122 { 00:15:05.122 "name": "BaseBdev1", 00:15:05.122 "uuid": "602b9eb2-f469-492f-ae16-1325fa6e59d6", 00:15:05.122 "is_configured": true, 00:15:05.122 "data_offset": 0, 00:15:05.122 "data_size": 65536 00:15:05.122 }, 00:15:05.122 { 00:15:05.122 "name": "BaseBdev2", 00:15:05.122 "uuid": "235f4ea4-7242-42fb-88a8-4335d5e0a53f", 00:15:05.122 "is_configured": true, 00:15:05.122 "data_offset": 0, 00:15:05.122 "data_size": 65536 00:15:05.122 } 00:15:05.122 ] 00:15:05.122 }' 00:15:05.122 04:55:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.122 04:55:28 -- common/autotest_common.sh@10 -- # set +x 00:15:05.381 04:55:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:05.641 [2024-11-18 04:55:29.018111] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.641 04:55:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.911 04:55:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.911 "name": "Existed_Raid", 00:15:05.911 "uuid": "63884729-8334-4d6e-ae59-de26bee9c087", 00:15:05.911 "strip_size_kb": 0, 00:15:05.911 "state": "online", 00:15:05.911 "raid_level": "raid1", 00:15:05.911 "superblock": false, 00:15:05.911 "num_base_bdevs": 2, 00:15:05.911 "num_base_bdevs_discovered": 1, 00:15:05.911 "num_base_bdevs_operational": 1, 00:15:05.911 "base_bdevs_list": [ 00:15:05.911 { 00:15:05.911 "name": null, 00:15:05.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.911 "is_configured": false, 00:15:05.911 "data_offset": 0, 00:15:05.911 "data_size": 65536 00:15:05.911 }, 00:15:05.911 { 00:15:05.911 "name": "BaseBdev2", 00:15:05.911 "uuid": "235f4ea4-7242-42fb-88a8-4335d5e0a53f", 00:15:05.911 "is_configured": true, 00:15:05.911 "data_offset": 0, 00:15:05.911 "data_size": 65536 00:15:05.911 } 00:15:05.911 ] 00:15:05.911 }' 00:15:05.911 04:55:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.911 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:06.168 04:55:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:06.168 04:55:29 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:06.168 04:55:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.168 04:55:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:06.427 04:55:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:06.427 04:55:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.427 04:55:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:06.686 [2024-11-18 04:55:30.141702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.686 [2024-11-18 04:55:30.141774] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.686 [2024-11-18 04:55:30.141847] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.945 [2024-11-18 04:55:30.214801] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.945 [2024-11-18 04:55:30.214849] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:06.945 04:55:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:06.945 04:55:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:06.945 04:55:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.945 04:55:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:06.945 04:55:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:06.945 04:55:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:06.946 04:55:30 -- bdev/bdev_raid.sh@287 -- # killprocess 70509 00:15:06.946 04:55:30 -- common/autotest_common.sh@936 -- # '[' -z 70509 ']' 00:15:06.946 04:55:30 -- common/autotest_common.sh@940 -- # kill -0 70509 00:15:06.946 04:55:30 -- common/autotest_common.sh@941 -- # uname 00:15:06.946 04:55:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.946 04:55:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70509 00:15:07.205 04:55:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.205 04:55:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.205 04:55:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70509' 00:15:07.205 killing process with pid 70509 00:15:07.205 04:55:30 -- common/autotest_common.sh@955 -- # kill 70509 00:15:07.205 04:55:30 -- common/autotest_common.sh@960 -- # wait 70509 00:15:07.205 [2024-11-18 04:55:30.484991] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.205 [2024-11-18 04:55:30.485123] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:08.142 00:15:08.142 real 0m8.367s 00:15:08.142 user 0m13.661s 00:15:08.142 sys 0m1.172s 00:15:08.142 04:55:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:08.142 ************************************ 00:15:08.142 END TEST raid_state_function_test 00:15:08.142 ************************************ 00:15:08.142 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:08.142 04:55:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:08.142 04:55:31 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.142 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:08.142 ************************************ 00:15:08.142 START TEST raid_state_function_test_sb 00:15:08.142 ************************************ 00:15:08.142 04:55:31 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:08.142 04:55:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=70796 00:15:08.143 Process raid pid: 70796 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70796' 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70796 /var/tmp/spdk-raid.sock 00:15:08.143 04:55:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:08.143 04:55:31 -- common/autotest_common.sh@829 -- # '[' -z 70796 ']' 00:15:08.143 04:55:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:08.143 04:55:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:08.143 04:55:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:08.143 04:55:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.143 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:08.401 [2024-11-18 04:55:31.684638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
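For anyone replaying this run by hand: what is coming up here is SPDK's stub bdev application, which the harness launches and then polls until its JSON-RPC socket answers before issuing any bdev_* calls. A minimal sketch of that launch-and-wait step, assuming the built tree at the /home/vagrant/spdk_repo/spdk paths shown in the trace (the until-loop is a simplified stand-in for the harness's waitforlisten helper; rpc_get_methods is a stock SPDK RPC):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # poll the UNIX-domain RPC socket until the app is ready to take bdev_* RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

The -L bdev_raid flag is what enables the *DEBUG*: lines from bdev_raid.c that appear throughout this log.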
00:15:08.401 [2024-11-18 04:55:31.684807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.401 [2024-11-18 04:55:31.857766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.661 [2024-11-18 04:55:32.032418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.920 [2024-11-18 04:55:32.207843] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.179 04:55:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.179 04:55:32 -- common/autotest_common.sh@862 -- # return 0 00:15:09.179 04:55:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:09.438 [2024-11-18 04:55:32.837751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.438 [2024-11-18 04:55:32.837847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.438 [2024-11-18 04:55:32.837863] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.438 [2024-11-18 04:55:32.837880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.438 04:55:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.697 04:55:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.697 "name": "Existed_Raid", 00:15:09.697 "uuid": "b08745fb-1fc7-4924-8043-376a48cd81bf", 00:15:09.697 "strip_size_kb": 0, 00:15:09.697 "state": "configuring", 00:15:09.697 "raid_level": "raid1", 00:15:09.697 "superblock": true, 00:15:09.697 "num_base_bdevs": 2, 00:15:09.697 "num_base_bdevs_discovered": 0, 00:15:09.697 "num_base_bdevs_operational": 2, 00:15:09.697 "base_bdevs_list": [ 00:15:09.697 { 00:15:09.697 "name": "BaseBdev1", 00:15:09.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.697 "is_configured": false, 00:15:09.697 "data_offset": 0, 00:15:09.697 "data_size": 0 00:15:09.697 }, 00:15:09.697 { 00:15:09.697 "name": "BaseBdev2", 00:15:09.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.697 "is_configured": false, 00:15:09.697 "data_offset": 0, 00:15:09.697 "data_size": 0 00:15:09.697 } 00:15:09.697 ] 00:15:09.697 }' 00:15:09.697 04:55:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.697 04:55:33 -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.956 04:55:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.215 [2024-11-18 04:55:33.617813] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.215 [2024-11-18 04:55:33.617879] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:10.215 04:55:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.474 [2024-11-18 04:55:33.817882] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.474 [2024-11-18 04:55:33.817964] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.474 [2024-11-18 04:55:33.817988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.474 [2024-11-18 04:55:33.818005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.474 04:55:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.733 [2024-11-18 04:55:34.054235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.733 BaseBdev1 00:15:10.733 04:55:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:10.733 04:55:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:10.733 04:55:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.733 04:55:34 -- common/autotest_common.sh@899 -- # local i 00:15:10.733 04:55:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.733 04:55:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.733 04:55:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:10.992 04:55:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.252 [ 00:15:11.252 { 00:15:11.252 "name": "BaseBdev1", 00:15:11.252 "aliases": [ 00:15:11.252 "209bbe2e-f83e-4ad0-8774-644b0621c2be" 00:15:11.252 ], 00:15:11.252 "product_name": "Malloc disk", 00:15:11.252 "block_size": 512, 00:15:11.252 "num_blocks": 65536, 00:15:11.252 "uuid": "209bbe2e-f83e-4ad0-8774-644b0621c2be", 00:15:11.252 "assigned_rate_limits": { 00:15:11.252 "rw_ios_per_sec": 0, 00:15:11.252 "rw_mbytes_per_sec": 0, 00:15:11.252 "r_mbytes_per_sec": 0, 00:15:11.252 "w_mbytes_per_sec": 0 00:15:11.252 }, 00:15:11.252 "claimed": true, 00:15:11.252 "claim_type": "exclusive_write", 00:15:11.252 "zoned": false, 00:15:11.252 "supported_io_types": { 00:15:11.252 "read": true, 00:15:11.252 "write": true, 00:15:11.252 "unmap": true, 00:15:11.252 "write_zeroes": true, 00:15:11.252 "flush": true, 00:15:11.252 "reset": true, 00:15:11.252 "compare": false, 00:15:11.252 "compare_and_write": false, 00:15:11.252 "abort": true, 00:15:11.252 "nvme_admin": false, 00:15:11.252 "nvme_io": false 00:15:11.252 }, 00:15:11.252 "memory_domains": [ 00:15:11.252 { 00:15:11.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.252 "dma_device_type": 2 00:15:11.252 } 00:15:11.252 ], 00:15:11.252 "driver_specific": {} 00:15:11.252 } 00:15:11.252 ] 00:15:11.252 04:55:34 -- 
common/autotest_common.sh@905 -- # return 0 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.252 "name": "Existed_Raid", 00:15:11.252 "uuid": "9c81e333-54b5-48c2-93f4-0fbe470d708a", 00:15:11.252 "strip_size_kb": 0, 00:15:11.252 "state": "configuring", 00:15:11.252 "raid_level": "raid1", 00:15:11.252 "superblock": true, 00:15:11.252 "num_base_bdevs": 2, 00:15:11.252 "num_base_bdevs_discovered": 1, 00:15:11.252 "num_base_bdevs_operational": 2, 00:15:11.252 "base_bdevs_list": [ 00:15:11.252 { 00:15:11.252 "name": "BaseBdev1", 00:15:11.252 "uuid": "209bbe2e-f83e-4ad0-8774-644b0621c2be", 00:15:11.252 "is_configured": true, 00:15:11.252 "data_offset": 2048, 00:15:11.252 "data_size": 63488 00:15:11.252 }, 00:15:11.252 { 00:15:11.252 "name": "BaseBdev2", 00:15:11.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.252 "is_configured": false, 00:15:11.252 "data_offset": 0, 00:15:11.252 "data_size": 0 00:15:11.252 } 00:15:11.252 ] 00:15:11.252 }' 00:15:11.252 04:55:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.252 04:55:34 -- common/autotest_common.sh@10 -- # set +x 00:15:11.833 04:55:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.833 [2024-11-18 04:55:35.270672] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.833 [2024-11-18 04:55:35.270770] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:11.833 04:55:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:11.833 04:55:35 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:12.124 04:55:35 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.396 BaseBdev1 00:15:12.396 04:55:35 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:12.396 04:55:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.396 04:55:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.396 04:55:35 -- common/autotest_common.sh@899 -- # local i 00:15:12.396 04:55:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.396 04:55:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.396 04:55:35 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.655 04:55:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.913 [ 00:15:12.913 { 00:15:12.913 "name": "BaseBdev1", 00:15:12.913 "aliases": [ 00:15:12.913 "7f40c165-afc2-4574-8cab-ebd0cf22b9c9" 00:15:12.913 ], 00:15:12.913 "product_name": "Malloc disk", 00:15:12.913 "block_size": 512, 00:15:12.913 "num_blocks": 65536, 00:15:12.913 "uuid": "7f40c165-afc2-4574-8cab-ebd0cf22b9c9", 00:15:12.913 "assigned_rate_limits": { 00:15:12.913 "rw_ios_per_sec": 0, 00:15:12.913 "rw_mbytes_per_sec": 0, 00:15:12.913 "r_mbytes_per_sec": 0, 00:15:12.913 "w_mbytes_per_sec": 0 00:15:12.913 }, 00:15:12.913 "claimed": false, 00:15:12.913 "zoned": false, 00:15:12.913 "supported_io_types": { 00:15:12.913 "read": true, 00:15:12.913 "write": true, 00:15:12.913 "unmap": true, 00:15:12.913 "write_zeroes": true, 00:15:12.913 "flush": true, 00:15:12.913 "reset": true, 00:15:12.913 "compare": false, 00:15:12.913 "compare_and_write": false, 00:15:12.913 "abort": true, 00:15:12.913 "nvme_admin": false, 00:15:12.913 "nvme_io": false 00:15:12.913 }, 00:15:12.913 "memory_domains": [ 00:15:12.913 { 00:15:12.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.913 "dma_device_type": 2 00:15:12.913 } 00:15:12.913 ], 00:15:12.913 "driver_specific": {} 00:15:12.913 } 00:15:12.913 ] 00:15:12.913 04:55:36 -- common/autotest_common.sh@905 -- # return 0 00:15:12.913 04:55:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:13.173 [2024-11-18 04:55:36.447930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.173 [2024-11-18 04:55:36.450137] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.173 [2024-11-18 04:55:36.450232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.173 "name": "Existed_Raid", 00:15:13.173 "uuid": "c59c73dc-1787-413c-a82a-c955ec92618e", 00:15:13.173 "strip_size_kb": 0, 00:15:13.173 "state": "configuring", 
00:15:13.173 "raid_level": "raid1", 00:15:13.173 "superblock": true, 00:15:13.173 "num_base_bdevs": 2, 00:15:13.173 "num_base_bdevs_discovered": 1, 00:15:13.173 "num_base_bdevs_operational": 2, 00:15:13.173 "base_bdevs_list": [ 00:15:13.173 { 00:15:13.173 "name": "BaseBdev1", 00:15:13.173 "uuid": "7f40c165-afc2-4574-8cab-ebd0cf22b9c9", 00:15:13.173 "is_configured": true, 00:15:13.173 "data_offset": 2048, 00:15:13.173 "data_size": 63488 00:15:13.173 }, 00:15:13.173 { 00:15:13.173 "name": "BaseBdev2", 00:15:13.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.173 "is_configured": false, 00:15:13.173 "data_offset": 0, 00:15:13.173 "data_size": 0 00:15:13.173 } 00:15:13.173 ] 00:15:13.173 }' 00:15:13.173 04:55:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.173 04:55:36 -- common/autotest_common.sh@10 -- # set +x 00:15:13.741 04:55:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.001 [2024-11-18 04:55:37.291944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.001 [2024-11-18 04:55:37.292261] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:14.001 [2024-11-18 04:55:37.292280] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.001 [2024-11-18 04:55:37.292420] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:14.001 [2024-11-18 04:55:37.292763] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:14.001 [2024-11-18 04:55:37.292795] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:14.001 [2024-11-18 04:55:37.292939] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.001 BaseBdev2 00:15:14.001 04:55:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:14.001 04:55:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:14.001 04:55:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.001 04:55:37 -- common/autotest_common.sh@899 -- # local i 00:15:14.001 04:55:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.001 04:55:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.001 04:55:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.261 04:55:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.519 [ 00:15:14.519 { 00:15:14.519 "name": "BaseBdev2", 00:15:14.519 "aliases": [ 00:15:14.519 "524cbe88-a811-48e8-8005-be10d6743eb0" 00:15:14.519 ], 00:15:14.519 "product_name": "Malloc disk", 00:15:14.519 "block_size": 512, 00:15:14.519 "num_blocks": 65536, 00:15:14.519 "uuid": "524cbe88-a811-48e8-8005-be10d6743eb0", 00:15:14.519 "assigned_rate_limits": { 00:15:14.519 "rw_ios_per_sec": 0, 00:15:14.519 "rw_mbytes_per_sec": 0, 00:15:14.519 "r_mbytes_per_sec": 0, 00:15:14.519 "w_mbytes_per_sec": 0 00:15:14.519 }, 00:15:14.519 "claimed": true, 00:15:14.519 "claim_type": "exclusive_write", 00:15:14.519 "zoned": false, 00:15:14.519 "supported_io_types": { 00:15:14.519 "read": true, 00:15:14.519 "write": true, 00:15:14.519 "unmap": true, 00:15:14.519 "write_zeroes": true, 00:15:14.519 "flush": true, 00:15:14.519 "reset": true, 
00:15:14.519 "compare": false, 00:15:14.519 "compare_and_write": false, 00:15:14.519 "abort": true, 00:15:14.519 "nvme_admin": false, 00:15:14.519 "nvme_io": false 00:15:14.519 }, 00:15:14.519 "memory_domains": [ 00:15:14.519 { 00:15:14.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.519 "dma_device_type": 2 00:15:14.519 } 00:15:14.519 ], 00:15:14.519 "driver_specific": {} 00:15:14.519 } 00:15:14.519 ] 00:15:14.519 04:55:37 -- common/autotest_common.sh@905 -- # return 0 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.519 04:55:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.778 04:55:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.778 "name": "Existed_Raid", 00:15:14.778 "uuid": "c59c73dc-1787-413c-a82a-c955ec92618e", 00:15:14.778 "strip_size_kb": 0, 00:15:14.778 "state": "online", 00:15:14.778 "raid_level": "raid1", 00:15:14.778 "superblock": true, 00:15:14.778 "num_base_bdevs": 2, 00:15:14.778 "num_base_bdevs_discovered": 2, 00:15:14.778 "num_base_bdevs_operational": 2, 00:15:14.778 "base_bdevs_list": [ 00:15:14.778 { 00:15:14.778 "name": "BaseBdev1", 00:15:14.778 "uuid": "7f40c165-afc2-4574-8cab-ebd0cf22b9c9", 00:15:14.778 "is_configured": true, 00:15:14.778 "data_offset": 2048, 00:15:14.778 "data_size": 63488 00:15:14.778 }, 00:15:14.778 { 00:15:14.778 "name": "BaseBdev2", 00:15:14.778 "uuid": "524cbe88-a811-48e8-8005-be10d6743eb0", 00:15:14.778 "is_configured": true, 00:15:14.778 "data_offset": 2048, 00:15:14.778 "data_size": 63488 00:15:14.778 } 00:15:14.778 ] 00:15:14.778 }' 00:15:14.778 04:55:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.778 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:15:15.038 04:55:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:15.038 [2024-11-18 04:55:38.532519] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:15.297 
04:55:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.297 04:55:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.554 04:55:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.554 "name": "Existed_Raid", 00:15:15.554 "uuid": "c59c73dc-1787-413c-a82a-c955ec92618e", 00:15:15.554 "strip_size_kb": 0, 00:15:15.554 "state": "online", 00:15:15.554 "raid_level": "raid1", 00:15:15.554 "superblock": true, 00:15:15.554 "num_base_bdevs": 2, 00:15:15.554 "num_base_bdevs_discovered": 1, 00:15:15.554 "num_base_bdevs_operational": 1, 00:15:15.554 "base_bdevs_list": [ 00:15:15.554 { 00:15:15.554 "name": null, 00:15:15.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.554 "is_configured": false, 00:15:15.554 "data_offset": 2048, 00:15:15.554 "data_size": 63488 00:15:15.554 }, 00:15:15.554 { 00:15:15.554 "name": "BaseBdev2", 00:15:15.554 "uuid": "524cbe88-a811-48e8-8005-be10d6743eb0", 00:15:15.554 "is_configured": true, 00:15:15.554 "data_offset": 2048, 00:15:15.554 "data_size": 63488 00:15:15.554 } 00:15:15.554 ] 00:15:15.554 }' 00:15:15.554 04:55:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.554 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:15:15.826 04:55:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:15.827 04:55:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:15.827 04:55:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.827 04:55:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:16.088 04:55:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:16.088 04:55:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.088 04:55:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:16.346 [2024-11-18 04:55:39.687402] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.346 [2024-11-18 04:55:39.687480] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.346 [2024-11-18 04:55:39.687563] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.346 [2024-11-18 04:55:39.760218] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.347 [2024-11-18 04:55:39.760280] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:16.347 04:55:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:16.347 04:55:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:16.347 04:55:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
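The rpc.py call traced just above and the jq on the line that follows form one command substitution: after BaseBdev2 was deleted and the raid bdev destructed, bdev_raid_get_bdevs all returns an empty array, .[0]["name"] therefore yields null, and select(.) drops the null so the whole pipeline expands to an empty string. The same teardown check in isolation, a sketch using only commands visible in the trace:

  raid_bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                  bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
  # empty expansion means no raid bdev is left registered
  [[ -z "$raid_bdev" ]] && echo 'raid bdev fully torn down'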
00:15:16.347 04:55:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:16.605 04:55:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:16.605 04:55:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:16.605 04:55:40 -- bdev/bdev_raid.sh@287 -- # killprocess 70796 00:15:16.605 04:55:40 -- common/autotest_common.sh@936 -- # '[' -z 70796 ']' 00:15:16.605 04:55:40 -- common/autotest_common.sh@940 -- # kill -0 70796 00:15:16.605 04:55:40 -- common/autotest_common.sh@941 -- # uname 00:15:16.605 04:55:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.605 04:55:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70796 00:15:16.605 04:55:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:16.605 04:55:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:16.605 killing process with pid 70796 00:15:16.605 04:55:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70796' 00:15:16.605 04:55:40 -- common/autotest_common.sh@955 -- # kill 70796 00:15:16.605 [2024-11-18 04:55:40.079387] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.605 04:55:40 -- common/autotest_common.sh@960 -- # wait 70796 00:15:16.605 [2024-11-18 04:55:40.079519] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:17.985 00:15:17.985 real 0m9.537s 00:15:17.985 user 0m15.674s 00:15:17.985 sys 0m1.389s 00:15:17.985 04:55:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:17.985 04:55:41 -- common/autotest_common.sh@10 -- # set +x 00:15:17.985 ************************************ 00:15:17.985 END TEST raid_state_function_test_sb 00:15:17.985 ************************************ 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:17.985 04:55:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:17.985 04:55:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.985 04:55:41 -- common/autotest_common.sh@10 -- # set +x 00:15:17.985 ************************************ 00:15:17.985 START TEST raid_superblock_test 00:15:17.985 ************************************ 00:15:17.985 04:55:41 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@357 -- # raid_pid=71104 00:15:17.985 04:55:41 -- 
bdev/bdev_raid.sh@358 -- # waitforlisten 71104 /var/tmp/spdk-raid.sock 00:15:17.985 04:55:41 -- common/autotest_common.sh@829 -- # '[' -z 71104 ']' 00:15:17.985 04:55:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.985 04:55:41 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:17.985 04:55:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.985 04:55:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.985 04:55:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.985 04:55:41 -- common/autotest_common.sh@10 -- # set +x 00:15:17.985 [2024-11-18 04:55:41.276312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.985 [2024-11-18 04:55:41.276500] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71104 ] 00:15:17.985 [2024-11-18 04:55:41.443263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.245 [2024-11-18 04:55:41.620254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.504 [2024-11-18 04:55:41.787315] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.763 04:55:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.763 04:55:42 -- common/autotest_common.sh@862 -- # return 0 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:18.763 04:55:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:19.022 malloc1 00:15:19.022 04:55:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:19.281 [2024-11-18 04:55:42.651710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:19.281 [2024-11-18 04:55:42.651828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.281 [2024-11-18 04:55:42.651867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:19.281 [2024-11-18 04:55:42.651880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.281 [2024-11-18 04:55:42.654352] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.281 [2024-11-18 04:55:42.654390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:19.281 pt1 00:15:19.281 04:55:42 
-- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.281 04:55:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:19.540 malloc2 00:15:19.540 04:55:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.798 [2024-11-18 04:55:43.100581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.798 [2024-11-18 04:55:43.100678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.798 [2024-11-18 04:55:43.100709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:19.798 [2024-11-18 04:55:43.100722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.798 [2024-11-18 04:55:43.103183] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.798 [2024-11-18 04:55:43.103269] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.798 pt2 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:19.798 [2024-11-18 04:55:43.284666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:19.798 [2024-11-18 04:55:43.286637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.798 [2024-11-18 04:55:43.286885] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:15:19.798 [2024-11-18 04:55:43.286903] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.798 [2024-11-18 04:55:43.287043] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:19.798 [2024-11-18 04:55:43.287452] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:15:19.798 [2024-11-18 04:55:43.287513] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:15:19.798 [2024-11-18 04:55:43.287671] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.798 04:55:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.799 04:55:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.799 04:55:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.058 04:55:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.058 "name": "raid_bdev1", 00:15:20.058 "uuid": "765fad07-4d41-4ba3-ab18-3def1eed034a", 00:15:20.058 "strip_size_kb": 0, 00:15:20.058 "state": "online", 00:15:20.058 "raid_level": "raid1", 00:15:20.058 "superblock": true, 00:15:20.058 "num_base_bdevs": 2, 00:15:20.058 "num_base_bdevs_discovered": 2, 00:15:20.058 "num_base_bdevs_operational": 2, 00:15:20.058 "base_bdevs_list": [ 00:15:20.058 { 00:15:20.058 "name": "pt1", 00:15:20.058 "uuid": "4e6c0b4d-5283-522e-a226-b886e2c0cdee", 00:15:20.058 "is_configured": true, 00:15:20.058 "data_offset": 2048, 00:15:20.058 "data_size": 63488 00:15:20.058 }, 00:15:20.058 { 00:15:20.058 "name": "pt2", 00:15:20.058 "uuid": "785220ed-0d2e-5dcf-822f-64433e8f40a7", 00:15:20.058 "is_configured": true, 00:15:20.058 "data_offset": 2048, 00:15:20.058 "data_size": 63488 00:15:20.058 } 00:15:20.058 ] 00:15:20.058 }' 00:15:20.058 04:55:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.058 04:55:43 -- common/autotest_common.sh@10 -- # set +x 00:15:20.317 04:55:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:20.317 04:55:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:20.576 [2024-11-18 04:55:44.065157] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.576 04:55:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=765fad07-4d41-4ba3-ab18-3def1eed034a 00:15:20.576 04:55:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 765fad07-4d41-4ba3-ab18-3def1eed034a ']' 00:15:20.576 04:55:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:20.834 [2024-11-18 04:55:44.309023] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.834 [2024-11-18 04:55:44.309076] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.834 [2024-11-18 04:55:44.309164] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.834 [2024-11-18 04:55:44.309252] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.834 [2024-11-18 04:55:44.309268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:15:20.834 04:55:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.834 04:55:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:21.093 04:55:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:21.093 04:55:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:21.093 04:55:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:21.093 04:55:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:21.351 04:55:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:21.351 04:55:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:21.611 04:55:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:21.611 04:55:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:21.611 04:55:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:21.611 04:55:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:21.611 04:55:45 -- common/autotest_common.sh@650 -- # local es=0 00:15:21.611 04:55:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:21.611 04:55:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.611 04:55:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.611 04:55:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.611 04:55:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.611 04:55:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.611 04:55:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.611 04:55:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.611 04:55:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:21.611 04:55:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:21.870 [2024-11-18 04:55:45.313265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:21.871 [2024-11-18 04:55:45.315311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:21.871 [2024-11-18 04:55:45.315400] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:21.871 [2024-11-18 04:55:45.315494] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:21.871 [2024-11-18 04:55:45.315523] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.871 [2024-11-18 04:55:45.315536] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:15:21.871 request: 00:15:21.871 { 00:15:21.871 "name": "raid_bdev1", 00:15:21.871 "raid_level": "raid1", 00:15:21.871 "base_bdevs": [ 00:15:21.871 "malloc1", 00:15:21.871 "malloc2" 00:15:21.871 ], 00:15:21.871 "superblock": false, 00:15:21.871 "method": "bdev_raid_create", 00:15:21.871 "req_id": 1 00:15:21.871 } 00:15:21.871 Got JSON-RPC error response 00:15:21.871 response: 00:15:21.871 { 00:15:21.871 "code": -17, 00:15:21.871 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:21.871 } 00:15:21.871 04:55:45 -- common/autotest_common.sh@653 -- # es=1 00:15:21.871 04:55:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 
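What the es bookkeeping above is recording: the preceding bdev_raid_create was asked to assemble raid_bdev1 directly from malloc1 and malloc2, but both bdevs still carry the raid superblock written through the pt1/pt2 passthrus, so the RPC correctly fails with -17 (File exists) and the NOT wrapper treats the nonzero exit as the expected result. The same negative test as a standalone sketch:

  # this create is expected to fail while the superblock is still on the base bdevs
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
         bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
      echo 'unexpected success: base bdevs still carry a raid superblock' >&2
      exit 1
  fi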
00:15:21.871 04:55:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:21.871 04:55:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:21.871 04:55:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.871 04:55:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:22.129 04:55:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:22.129 04:55:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:22.129 04:55:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:22.389 [2024-11-18 04:55:45.809292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:22.389 [2024-11-18 04:55:45.809375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.389 [2024-11-18 04:55:45.809406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:15:22.389 [2024-11-18 04:55:45.809419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.389 [2024-11-18 04:55:45.811852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.389 [2024-11-18 04:55:45.811892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:22.389 [2024-11-18 04:55:45.812004] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:22.389 [2024-11-18 04:55:45.812056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.389 pt1 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.389 04:55:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.648 04:55:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.648 "name": "raid_bdev1", 00:15:22.648 "uuid": "765fad07-4d41-4ba3-ab18-3def1eed034a", 00:15:22.648 "strip_size_kb": 0, 00:15:22.648 "state": "configuring", 00:15:22.648 "raid_level": "raid1", 00:15:22.648 "superblock": true, 00:15:22.648 "num_base_bdevs": 2, 00:15:22.648 "num_base_bdevs_discovered": 1, 00:15:22.649 "num_base_bdevs_operational": 2, 00:15:22.649 "base_bdevs_list": [ 00:15:22.649 { 00:15:22.649 "name": "pt1", 00:15:22.649 "uuid": "4e6c0b4d-5283-522e-a226-b886e2c0cdee", 00:15:22.649 "is_configured": true, 00:15:22.649 "data_offset": 2048, 00:15:22.649 "data_size": 63488 00:15:22.649 }, 00:15:22.649 { 00:15:22.649 "name": null, 00:15:22.649 "uuid": "785220ed-0d2e-5dcf-822f-64433e8f40a7", 00:15:22.649 "is_configured": false, 
00:15:22.649 "data_offset": 2048, 00:15:22.649 "data_size": 63488 00:15:22.649 } 00:15:22.649 ] 00:15:22.649 }' 00:15:22.649 04:55:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.649 04:55:46 -- common/autotest_common.sh@10 -- # set +x 00:15:22.908 04:55:46 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:22.908 04:55:46 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:22.908 04:55:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:22.908 04:55:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:23.167 [2024-11-18 04:55:46.497575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:23.167 [2024-11-18 04:55:46.497838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.167 [2024-11-18 04:55:46.497887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:15:23.167 [2024-11-18 04:55:46.497902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.167 [2024-11-18 04:55:46.498473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.167 [2024-11-18 04:55:46.498498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:23.167 [2024-11-18 04:55:46.498634] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:23.167 [2024-11-18 04:55:46.498685] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.167 [2024-11-18 04:55:46.498842] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:15:23.167 [2024-11-18 04:55:46.498858] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.167 [2024-11-18 04:55:46.498999] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:23.167 [2024-11-18 04:55:46.499407] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:15:23.167 [2024-11-18 04:55:46.499427] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:15:23.167 [2024-11-18 04:55:46.499587] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.167 pt2 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.167 04:55:46 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.426 04:55:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.426 "name": "raid_bdev1", 00:15:23.426 "uuid": "765fad07-4d41-4ba3-ab18-3def1eed034a", 00:15:23.426 "strip_size_kb": 0, 00:15:23.426 "state": "online", 00:15:23.426 "raid_level": "raid1", 00:15:23.426 "superblock": true, 00:15:23.426 "num_base_bdevs": 2, 00:15:23.426 "num_base_bdevs_discovered": 2, 00:15:23.426 "num_base_bdevs_operational": 2, 00:15:23.426 "base_bdevs_list": [ 00:15:23.426 { 00:15:23.426 "name": "pt1", 00:15:23.426 "uuid": "4e6c0b4d-5283-522e-a226-b886e2c0cdee", 00:15:23.426 "is_configured": true, 00:15:23.426 "data_offset": 2048, 00:15:23.426 "data_size": 63488 00:15:23.426 }, 00:15:23.426 { 00:15:23.426 "name": "pt2", 00:15:23.426 "uuid": "785220ed-0d2e-5dcf-822f-64433e8f40a7", 00:15:23.426 "is_configured": true, 00:15:23.426 "data_offset": 2048, 00:15:23.426 "data_size": 63488 00:15:23.426 } 00:15:23.426 ] 00:15:23.426 }' 00:15:23.426 04:55:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.426 04:55:46 -- common/autotest_common.sh@10 -- # set +x 00:15:23.685 04:55:47 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:23.685 04:55:47 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:23.943 [2024-11-18 04:55:47.229990] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.943 04:55:47 -- bdev/bdev_raid.sh@430 -- # '[' 765fad07-4d41-4ba3-ab18-3def1eed034a '!=' 765fad07-4d41-4ba3-ab18-3def1eed034a ']' 00:15:23.943 04:55:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:23.943 04:55:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:23.943 04:55:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:23.943 04:55:47 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:24.202 [2024-11-18 04:55:47.485880] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.202 04:55:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.460 04:55:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.460 "name": "raid_bdev1", 00:15:24.460 "uuid": "765fad07-4d41-4ba3-ab18-3def1eed034a", 00:15:24.460 "strip_size_kb": 0, 00:15:24.461 "state": "online", 00:15:24.461 "raid_level": "raid1", 00:15:24.461 "superblock": true, 00:15:24.461 "num_base_bdevs": 2, 00:15:24.461 "num_base_bdevs_discovered": 1, 00:15:24.461 "num_base_bdevs_operational": 1, 00:15:24.461 "base_bdevs_list": [ 00:15:24.461 { 
00:15:24.461 "name": null, 00:15:24.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.461 "is_configured": false, 00:15:24.461 "data_offset": 2048, 00:15:24.461 "data_size": 63488 00:15:24.461 }, 00:15:24.461 { 00:15:24.461 "name": "pt2", 00:15:24.461 "uuid": "785220ed-0d2e-5dcf-822f-64433e8f40a7", 00:15:24.461 "is_configured": true, 00:15:24.461 "data_offset": 2048, 00:15:24.461 "data_size": 63488 00:15:24.461 } 00:15:24.461 ] 00:15:24.461 }' 00:15:24.461 04:55:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.461 04:55:47 -- common/autotest_common.sh@10 -- # set +x 00:15:24.720 04:55:48 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:24.979 [2024-11-18 04:55:48.270103] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.979 [2024-11-18 04:55:48.270136] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.979 [2024-11-18 04:55:48.270218] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.979 [2024-11-18 04:55:48.270297] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.979 [2024-11-18 04:55:48.270336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:15:24.979 04:55:48 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.979 04:55:48 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:25.238 04:55:48 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.497 [2024-11-18 04:55:48.914186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.497 [2024-11-18 04:55:48.914472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.497 [2024-11-18 04:55:48.914513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:25.497 [2024-11-18 04:55:48.914531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.497 [2024-11-18 04:55:48.916880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.497 [2024-11-18 04:55:48.916926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.497 [2024-11-18 04:55:48.917032] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:25.497 [2024-11-18 04:55:48.917090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is 
claimed 00:15:25.497 [2024-11-18 04:55:48.917193] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:15:25.497 [2024-11-18 04:55:48.917228] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:25.497 [2024-11-18 04:55:48.917321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:25.497 [2024-11-18 04:55:48.917666] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:15:25.497 [2024-11-18 04:55:48.917682] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:15:25.497 [2024-11-18 04:55:48.917824] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.497 pt2 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.497 04:55:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.756 04:55:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.756 "name": "raid_bdev1", 00:15:25.756 "uuid": "765fad07-4d41-4ba3-ab18-3def1eed034a", 00:15:25.756 "strip_size_kb": 0, 00:15:25.756 "state": "online", 00:15:25.756 "raid_level": "raid1", 00:15:25.756 "superblock": true, 00:15:25.756 "num_base_bdevs": 2, 00:15:25.756 "num_base_bdevs_discovered": 1, 00:15:25.756 "num_base_bdevs_operational": 1, 00:15:25.756 "base_bdevs_list": [ 00:15:25.756 { 00:15:25.756 "name": null, 00:15:25.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.756 "is_configured": false, 00:15:25.756 "data_offset": 2048, 00:15:25.756 "data_size": 63488 00:15:25.756 }, 00:15:25.756 { 00:15:25.756 "name": "pt2", 00:15:25.757 "uuid": "785220ed-0d2e-5dcf-822f-64433e8f40a7", 00:15:25.757 "is_configured": true, 00:15:25.757 "data_offset": 2048, 00:15:25.757 "data_size": 63488 00:15:25.757 } 00:15:25.757 ] 00:15:25.757 }' 00:15:25.757 04:55:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.757 04:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:26.017 04:55:49 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:26.017 04:55:49 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:26.017 04:55:49 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:26.280 [2024-11-18 04:55:49.650660] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.280 04:55:49 -- bdev/bdev_raid.sh@506 -- # '[' 765fad07-4d41-4ba3-ab18-3def1eed034a '!=' 765fad07-4d41-4ba3-ab18-3def1eed034a ']' 00:15:26.280 04:55:49 -- bdev/bdev_raid.sh@511 -- # killprocess 71104 00:15:26.280 04:55:49 -- 
common/autotest_common.sh@936 -- # '[' -z 71104 ']' 00:15:26.280 04:55:49 -- common/autotest_common.sh@940 -- # kill -0 71104 00:15:26.280 04:55:49 -- common/autotest_common.sh@941 -- # uname 00:15:26.280 04:55:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.280 04:55:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71104 00:15:26.280 killing process with pid 71104 00:15:26.281 04:55:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.281 04:55:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.281 04:55:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71104' 00:15:26.281 04:55:49 -- common/autotest_common.sh@955 -- # kill 71104 00:15:26.281 [2024-11-18 04:55:49.697583] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.281 04:55:49 -- common/autotest_common.sh@960 -- # wait 71104 00:15:26.281 [2024-11-18 04:55:49.697674] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.281 [2024-11-18 04:55:49.697742] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.281 [2024-11-18 04:55:49.697758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:15:26.540 [2024-11-18 04:55:49.841015] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:27.478 00:15:27.478 real 0m9.640s 00:15:27.478 user 0m16.030s 00:15:27.478 sys 0m1.405s 00:15:27.478 04:55:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:27.478 04:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:27.478 ************************************ 00:15:27.478 END TEST raid_superblock_test 00:15:27.478 ************************************ 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:27.478 04:55:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:27.478 04:55:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.478 04:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:27.478 ************************************ 00:15:27.478 START TEST raid_state_function_test 00:15:27.478 ************************************ 00:15:27.478 04:55:50 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 
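The xtrace above shows raid_state_function_test assembling its base_bdevs list: a counting loop echoes BaseBdevN once per device and the loop output is collected into a bash array. A minimal standalone sketch of that pattern (variable names mirror the trace; capturing the loop via command substitution is an assumption about how the array is filled):

    num_base_bdevs=3
    # echo one name per base device, exactly as the (( i++ )) loop in the trace does
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    # base_bdevs now holds ('BaseBdev1' 'BaseBdev2' 'BaseBdev3'), matching the trace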
00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.478 Process raid pid: 71419 00:15:27.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:27.478 04:55:50 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:27.479 04:55:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=71419 00:15:27.479 04:55:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71419' 00:15:27.479 04:55:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71419 /var/tmp/spdk-raid.sock 00:15:27.479 04:55:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:27.479 04:55:50 -- common/autotest_common.sh@829 -- # '[' -z 71419 ']' 00:15:27.479 04:55:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:27.479 04:55:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.479 04:55:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:27.479 04:55:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.479 04:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:27.479 [2024-11-18 04:55:50.956777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
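At this point the test has spawned the bdev_svc stub app on a private RPC socket (pid 71419) and is blocked in waitforlisten until the app answers RPCs. A hedged sketch of that startup handshake, assuming rpc_get_methods polling (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this):

    # launch the stub target on its own UNIX-domain RPC socket with bdev_raid debug logging
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the app services RPCs; give up if it died during startup
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1
        sleep 0.1
    done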
00:15:27.479 [2024-11-18 04:55:50.956887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.738 [2024-11-18 04:55:51.121111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.997 [2024-11-18 04:55:51.293911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.997 [2024-11-18 04:55:51.481142] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.565 04:55:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.565 04:55:51 -- common/autotest_common.sh@862 -- # return 0 00:15:28.565 04:55:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:28.824 [2024-11-18 04:55:52.094951] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.824 [2024-11-18 04:55:52.095034] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.824 [2024-11-18 04:55:52.095050] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.824 [2024-11-18 04:55:52.095066] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.824 [2024-11-18 04:55:52.095089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:28.824 [2024-11-18 04:55:52.095102] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.824 04:55:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.083 04:55:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.083 "name": "Existed_Raid", 00:15:29.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.083 "strip_size_kb": 64, 00:15:29.083 "state": "configuring", 00:15:29.083 "raid_level": "raid0", 00:15:29.083 "superblock": false, 00:15:29.083 "num_base_bdevs": 3, 00:15:29.083 "num_base_bdevs_discovered": 0, 00:15:29.083 "num_base_bdevs_operational": 3, 00:15:29.083 "base_bdevs_list": [ 00:15:29.083 { 00:15:29.083 "name": "BaseBdev1", 00:15:29.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.083 "is_configured": false, 00:15:29.083 "data_offset": 0, 00:15:29.083 "data_size": 0 00:15:29.083 }, 00:15:29.083 { 00:15:29.083 "name": "BaseBdev2", 00:15:29.083 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:29.083 "is_configured": false, 00:15:29.083 "data_offset": 0, 00:15:29.083 "data_size": 0 00:15:29.083 }, 00:15:29.083 { 00:15:29.083 "name": "BaseBdev3", 00:15:29.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.083 "is_configured": false, 00:15:29.083 "data_offset": 0, 00:15:29.083 "data_size": 0 00:15:29.083 } 00:15:29.083 ] 00:15:29.083 }' 00:15:29.083 04:55:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.083 04:55:52 -- common/autotest_common.sh@10 -- # set +x 00:15:29.342 04:55:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.601 [2024-11-18 04:55:52.923023] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.601 [2024-11-18 04:55:52.923076] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:29.601 04:55:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.860 [2024-11-18 04:55:53.139192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.860 [2024-11-18 04:55:53.139261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.860 [2024-11-18 04:55:53.139275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.860 [2024-11-18 04:55:53.139292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.860 [2024-11-18 04:55:53.139300] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.860 [2024-11-18 04:55:53.139312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.860 04:55:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.119 [2024-11-18 04:55:53.424945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.119 BaseBdev1 00:15:30.119 04:55:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:30.119 04:55:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:30.119 04:55:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.119 04:55:53 -- common/autotest_common.sh@899 -- # local i 00:15:30.119 04:55:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.119 04:55:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.119 04:55:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.378 04:55:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.378 [ 00:15:30.378 { 00:15:30.378 "name": "BaseBdev1", 00:15:30.378 "aliases": [ 00:15:30.378 "4ad9a2b1-14a2-4cab-88a9-3310a4d1938a" 00:15:30.378 ], 00:15:30.378 "product_name": "Malloc disk", 00:15:30.378 "block_size": 512, 00:15:30.378 "num_blocks": 65536, 00:15:30.378 "uuid": "4ad9a2b1-14a2-4cab-88a9-3310a4d1938a", 00:15:30.378 "assigned_rate_limits": { 00:15:30.378 "rw_ios_per_sec": 0, 00:15:30.378 "rw_mbytes_per_sec": 0, 00:15:30.378 "r_mbytes_per_sec": 0, 00:15:30.378 "w_mbytes_per_sec": 0 
00:15:30.378 }, 00:15:30.378 "claimed": true, 00:15:30.378 "claim_type": "exclusive_write", 00:15:30.378 "zoned": false, 00:15:30.378 "supported_io_types": { 00:15:30.378 "read": true, 00:15:30.378 "write": true, 00:15:30.378 "unmap": true, 00:15:30.378 "write_zeroes": true, 00:15:30.378 "flush": true, 00:15:30.378 "reset": true, 00:15:30.378 "compare": false, 00:15:30.378 "compare_and_write": false, 00:15:30.378 "abort": true, 00:15:30.378 "nvme_admin": false, 00:15:30.378 "nvme_io": false 00:15:30.378 }, 00:15:30.378 "memory_domains": [ 00:15:30.378 { 00:15:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.378 "dma_device_type": 2 00:15:30.378 } 00:15:30.378 ], 00:15:30.378 "driver_specific": {} 00:15:30.378 } 00:15:30.378 ] 00:15:30.378 04:55:53 -- common/autotest_common.sh@905 -- # return 0 00:15:30.378 04:55:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:30.378 04:55:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.378 04:55:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.378 04:55:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.379 04:55:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.638 04:55:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.638 "name": "Existed_Raid", 00:15:30.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.638 "strip_size_kb": 64, 00:15:30.638 "state": "configuring", 00:15:30.638 "raid_level": "raid0", 00:15:30.638 "superblock": false, 00:15:30.638 "num_base_bdevs": 3, 00:15:30.638 "num_base_bdevs_discovered": 1, 00:15:30.638 "num_base_bdevs_operational": 3, 00:15:30.638 "base_bdevs_list": [ 00:15:30.638 { 00:15:30.638 "name": "BaseBdev1", 00:15:30.638 "uuid": "4ad9a2b1-14a2-4cab-88a9-3310a4d1938a", 00:15:30.638 "is_configured": true, 00:15:30.638 "data_offset": 0, 00:15:30.638 "data_size": 65536 00:15:30.638 }, 00:15:30.638 { 00:15:30.638 "name": "BaseBdev2", 00:15:30.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.638 "is_configured": false, 00:15:30.638 "data_offset": 0, 00:15:30.638 "data_size": 0 00:15:30.638 }, 00:15:30.638 { 00:15:30.638 "name": "BaseBdev3", 00:15:30.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.638 "is_configured": false, 00:15:30.638 "data_offset": 0, 00:15:30.638 "data_size": 0 00:15:30.638 } 00:15:30.638 ] 00:15:30.638 }' 00:15:30.638 04:55:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.638 04:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:30.897 04:55:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.156 [2024-11-18 04:55:54.605325] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.156 [2024-11-18 04:55:54.605411] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:15:31.156 04:55:54 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:31.156 04:55:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:31.415 [2024-11-18 04:55:54.809417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.415 [2024-11-18 04:55:54.811591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.415 [2024-11-18 04:55:54.811645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.415 [2024-11-18 04:55:54.811660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.415 [2024-11-18 04:55:54.811675] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.415 04:55:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.674 04:55:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.674 "name": "Existed_Raid", 00:15:31.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.674 "strip_size_kb": 64, 00:15:31.674 "state": "configuring", 00:15:31.674 "raid_level": "raid0", 00:15:31.674 "superblock": false, 00:15:31.674 "num_base_bdevs": 3, 00:15:31.674 "num_base_bdevs_discovered": 1, 00:15:31.674 "num_base_bdevs_operational": 3, 00:15:31.674 "base_bdevs_list": [ 00:15:31.674 { 00:15:31.674 "name": "BaseBdev1", 00:15:31.674 "uuid": "4ad9a2b1-14a2-4cab-88a9-3310a4d1938a", 00:15:31.674 "is_configured": true, 00:15:31.674 "data_offset": 0, 00:15:31.674 "data_size": 65536 00:15:31.674 }, 00:15:31.674 { 00:15:31.674 "name": "BaseBdev2", 00:15:31.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.674 "is_configured": false, 00:15:31.674 "data_offset": 0, 00:15:31.674 "data_size": 0 00:15:31.674 }, 00:15:31.674 { 00:15:31.674 "name": "BaseBdev3", 00:15:31.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.674 "is_configured": false, 00:15:31.674 "data_offset": 0, 00:15:31.674 "data_size": 0 00:15:31.674 } 00:15:31.674 ] 00:15:31.674 }' 00:15:31.674 04:55:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.674 04:55:55 -- common/autotest_common.sh@10 -- # set +x 00:15:31.933 04:55:55 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:32.192 [2024-11-18 04:55:55.608548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.192 BaseBdev2 00:15:32.192 04:55:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:32.192 04:55:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:32.192 04:55:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:32.192 04:55:55 -- common/autotest_common.sh@899 -- # local i 00:15:32.192 04:55:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:32.192 04:55:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:32.192 04:55:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.451 04:55:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:32.710 [ 00:15:32.710 { 00:15:32.710 "name": "BaseBdev2", 00:15:32.710 "aliases": [ 00:15:32.710 "f85be6c6-d39b-4bb1-9c9b-b97251d7570d" 00:15:32.710 ], 00:15:32.710 "product_name": "Malloc disk", 00:15:32.710 "block_size": 512, 00:15:32.710 "num_blocks": 65536, 00:15:32.710 "uuid": "f85be6c6-d39b-4bb1-9c9b-b97251d7570d", 00:15:32.710 "assigned_rate_limits": { 00:15:32.710 "rw_ios_per_sec": 0, 00:15:32.710 "rw_mbytes_per_sec": 0, 00:15:32.710 "r_mbytes_per_sec": 0, 00:15:32.710 "w_mbytes_per_sec": 0 00:15:32.710 }, 00:15:32.710 "claimed": true, 00:15:32.710 "claim_type": "exclusive_write", 00:15:32.710 "zoned": false, 00:15:32.710 "supported_io_types": { 00:15:32.710 "read": true, 00:15:32.710 "write": true, 00:15:32.710 "unmap": true, 00:15:32.710 "write_zeroes": true, 00:15:32.710 "flush": true, 00:15:32.710 "reset": true, 00:15:32.710 "compare": false, 00:15:32.710 "compare_and_write": false, 00:15:32.710 "abort": true, 00:15:32.710 "nvme_admin": false, 00:15:32.710 "nvme_io": false 00:15:32.710 }, 00:15:32.710 "memory_domains": [ 00:15:32.710 { 00:15:32.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.710 "dma_device_type": 2 00:15:32.710 } 00:15:32.710 ], 00:15:32.710 "driver_specific": {} 00:15:32.710 } 00:15:32.710 ] 00:15:32.710 04:55:56 -- common/autotest_common.sh@905 -- # return 0 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.710 04:55:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
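verify_raid_bdev_state, traced repeatedly above, asserts on one object from bdev_raid_get_bdevs: it selects the named raid bdev with jq and compares state, raid level, strip size, and base-bdev counts against the expected values. A minimal sketch of the same check for the situation in the surrounding trace (two of three base bdevs discovered, set still configuring):

    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    # the trace expects a raid0 set that is still waiting for BaseBdev3
    [[ $(jq -r '.state' <<<"$info") == "configuring" ]]
    (( $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 2 ))
    (( $(jq -r '.num_base_bdevs_operational' <<<"$info") == 3 ))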
00:15:32.969 04:55:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.969 "name": "Existed_Raid", 00:15:32.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.969 "strip_size_kb": 64, 00:15:32.969 "state": "configuring", 00:15:32.969 "raid_level": "raid0", 00:15:32.969 "superblock": false, 00:15:32.969 "num_base_bdevs": 3, 00:15:32.969 "num_base_bdevs_discovered": 2, 00:15:32.969 "num_base_bdevs_operational": 3, 00:15:32.969 "base_bdevs_list": [ 00:15:32.969 { 00:15:32.969 "name": "BaseBdev1", 00:15:32.969 "uuid": "4ad9a2b1-14a2-4cab-88a9-3310a4d1938a", 00:15:32.969 "is_configured": true, 00:15:32.969 "data_offset": 0, 00:15:32.969 "data_size": 65536 00:15:32.969 }, 00:15:32.969 { 00:15:32.969 "name": "BaseBdev2", 00:15:32.969 "uuid": "f85be6c6-d39b-4bb1-9c9b-b97251d7570d", 00:15:32.969 "is_configured": true, 00:15:32.969 "data_offset": 0, 00:15:32.969 "data_size": 65536 00:15:32.969 }, 00:15:32.969 { 00:15:32.969 "name": "BaseBdev3", 00:15:32.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.969 "is_configured": false, 00:15:32.969 "data_offset": 0, 00:15:32.969 "data_size": 0 00:15:32.969 } 00:15:32.969 ] 00:15:32.969 }' 00:15:32.969 04:55:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.969 04:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.228 04:55:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.486 [2024-11-18 04:55:56.821615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.486 [2024-11-18 04:55:56.821666] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:33.486 [2024-11-18 04:55:56.821691] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:33.486 [2024-11-18 04:55:56.821806] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:33.486 [2024-11-18 04:55:56.822159] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:33.486 [2024-11-18 04:55:56.822175] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:33.486 [2024-11-18 04:55:56.822473] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.486 BaseBdev3 00:15:33.486 04:55:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:33.486 04:55:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:33.486 04:55:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.486 04:55:56 -- common/autotest_common.sh@899 -- # local i 00:15:33.486 04:55:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.487 04:55:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.487 04:55:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.746 04:55:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.746 [ 00:15:33.746 { 00:15:33.746 "name": "BaseBdev3", 00:15:33.746 "aliases": [ 00:15:33.746 "be3a6e50-275c-44d2-a02a-f6ca1e7163c9" 00:15:33.746 ], 00:15:33.746 "product_name": "Malloc disk", 00:15:33.746 "block_size": 512, 00:15:33.746 "num_blocks": 65536, 00:15:33.746 "uuid": "be3a6e50-275c-44d2-a02a-f6ca1e7163c9", 00:15:33.746 "assigned_rate_limits": { 00:15:33.746 
"rw_ios_per_sec": 0, 00:15:33.746 "rw_mbytes_per_sec": 0, 00:15:33.746 "r_mbytes_per_sec": 0, 00:15:33.746 "w_mbytes_per_sec": 0 00:15:33.746 }, 00:15:33.746 "claimed": true, 00:15:33.746 "claim_type": "exclusive_write", 00:15:33.746 "zoned": false, 00:15:33.746 "supported_io_types": { 00:15:33.746 "read": true, 00:15:33.746 "write": true, 00:15:33.746 "unmap": true, 00:15:33.746 "write_zeroes": true, 00:15:33.746 "flush": true, 00:15:33.746 "reset": true, 00:15:33.746 "compare": false, 00:15:33.746 "compare_and_write": false, 00:15:33.746 "abort": true, 00:15:33.746 "nvme_admin": false, 00:15:33.746 "nvme_io": false 00:15:33.746 }, 00:15:33.746 "memory_domains": [ 00:15:33.746 { 00:15:33.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.746 "dma_device_type": 2 00:15:33.746 } 00:15:33.746 ], 00:15:33.746 "driver_specific": {} 00:15:33.746 } 00:15:33.746 ] 00:15:33.746 04:55:57 -- common/autotest_common.sh@905 -- # return 0 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.746 04:55:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.005 04:55:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.005 "name": "Existed_Raid", 00:15:34.005 "uuid": "480155a4-a52e-43dc-838b-98343b76dbb2", 00:15:34.005 "strip_size_kb": 64, 00:15:34.005 "state": "online", 00:15:34.005 "raid_level": "raid0", 00:15:34.005 "superblock": false, 00:15:34.005 "num_base_bdevs": 3, 00:15:34.005 "num_base_bdevs_discovered": 3, 00:15:34.005 "num_base_bdevs_operational": 3, 00:15:34.005 "base_bdevs_list": [ 00:15:34.005 { 00:15:34.005 "name": "BaseBdev1", 00:15:34.005 "uuid": "4ad9a2b1-14a2-4cab-88a9-3310a4d1938a", 00:15:34.005 "is_configured": true, 00:15:34.005 "data_offset": 0, 00:15:34.005 "data_size": 65536 00:15:34.005 }, 00:15:34.005 { 00:15:34.005 "name": "BaseBdev2", 00:15:34.005 "uuid": "f85be6c6-d39b-4bb1-9c9b-b97251d7570d", 00:15:34.005 "is_configured": true, 00:15:34.005 "data_offset": 0, 00:15:34.005 "data_size": 65536 00:15:34.005 }, 00:15:34.005 { 00:15:34.005 "name": "BaseBdev3", 00:15:34.005 "uuid": "be3a6e50-275c-44d2-a02a-f6ca1e7163c9", 00:15:34.005 "is_configured": true, 00:15:34.005 "data_offset": 0, 00:15:34.005 "data_size": 65536 00:15:34.005 } 00:15:34.005 ] 00:15:34.005 }' 00:15:34.005 04:55:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.005 04:55:57 -- common/autotest_common.sh@10 -- # set +x 00:15:34.264 04:55:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:34.522 [2024-11-18 04:55:57.890085] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.522 [2024-11-18 04:55:57.890127] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.522 [2024-11-18 04:55:57.890183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.522 04:55:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.782 04:55:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.782 "name": "Existed_Raid", 00:15:34.782 "uuid": "480155a4-a52e-43dc-838b-98343b76dbb2", 00:15:34.782 "strip_size_kb": 64, 00:15:34.782 "state": "offline", 00:15:34.782 "raid_level": "raid0", 00:15:34.782 "superblock": false, 00:15:34.782 "num_base_bdevs": 3, 00:15:34.782 "num_base_bdevs_discovered": 2, 00:15:34.782 "num_base_bdevs_operational": 2, 00:15:34.782 "base_bdevs_list": [ 00:15:34.782 { 00:15:34.782 "name": null, 00:15:34.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.782 "is_configured": false, 00:15:34.782 "data_offset": 0, 00:15:34.782 "data_size": 65536 00:15:34.782 }, 00:15:34.782 { 00:15:34.782 "name": "BaseBdev2", 00:15:34.782 "uuid": "f85be6c6-d39b-4bb1-9c9b-b97251d7570d", 00:15:34.782 "is_configured": true, 00:15:34.782 "data_offset": 0, 00:15:34.782 "data_size": 65536 00:15:34.782 }, 00:15:34.782 { 00:15:34.782 "name": "BaseBdev3", 00:15:34.782 "uuid": "be3a6e50-275c-44d2-a02a-f6ca1e7163c9", 00:15:34.782 "is_configured": true, 00:15:34.782 "data_offset": 0, 00:15:34.782 "data_size": 65536 00:15:34.782 } 00:15:34.782 ] 00:15:34.782 }' 00:15:34.782 04:55:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.782 04:55:58 -- common/autotest_common.sh@10 -- # set +x 00:15:35.041 04:55:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:35.041 04:55:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:35.041 04:55:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.041 04:55:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:35.300 04:55:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:35.300 04:55:58 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.300 04:55:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:35.559 [2024-11-18 04:55:58.884379] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.559 04:55:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:35.559 04:55:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:35.559 04:55:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:35.559 04:55:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.824 04:55:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:35.824 04:55:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.824 04:55:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:36.088 [2024-11-18 04:55:59.407134] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:36.088 [2024-11-18 04:55:59.407287] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:36.088 04:55:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:36.088 04:55:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:36.088 04:55:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:36.088 04:55:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.347 04:55:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:36.347 04:55:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:36.347 04:55:59 -- bdev/bdev_raid.sh@287 -- # killprocess 71419 00:15:36.347 04:55:59 -- common/autotest_common.sh@936 -- # '[' -z 71419 ']' 00:15:36.347 04:55:59 -- common/autotest_common.sh@940 -- # kill -0 71419 00:15:36.347 04:55:59 -- common/autotest_common.sh@941 -- # uname 00:15:36.347 04:55:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:36.347 04:55:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71419 00:15:36.347 04:55:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:36.347 04:55:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:36.347 killing process with pid 71419 00:15:36.347 04:55:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71419' 00:15:36.347 04:55:59 -- common/autotest_common.sh@955 -- # kill 71419 00:15:36.347 [2024-11-18 04:55:59.768868] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.347 04:55:59 -- common/autotest_common.sh@960 -- # wait 71419 00:15:36.347 [2024-11-18 04:55:59.768976] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.284 04:56:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:37.284 00:15:37.284 real 0m9.869s 00:15:37.284 user 0m16.288s 00:15:37.284 sys 0m1.512s 00:15:37.284 04:56:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.284 04:56:00 -- common/autotest_common.sh@10 -- # set +x 00:15:37.284 ************************************ 00:15:37.284 END TEST raid_state_function_test 00:15:37.284 ************************************ 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:37.543 04:56:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:37.543 04:56:00 -- 
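From here the suite reruns the same state machine with superblock=true: raid_state_function_test_sb passes -s to bdev_raid_create, and the on-disk metadata is why configured base bdevs in this run report data_offset 2048 and data_size 63488 instead of 0 and 65536. A sketch of the creation sequence under those flags (sizes match the 32 MiB, 512 B-block malloc bdevs used throughout this log):

    # three 32 MiB malloc bdevs with 512 B blocks serve as the RAID members
    for i in 1 2 3; do
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev$i"
    done
    # -z 64 sets the raid0 strip size in KiB; -s reserves space for an on-disk superblock
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid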
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.543 04:56:00 -- common/autotest_common.sh@10 -- # set +x 00:15:37.543 ************************************ 00:15:37.543 START TEST raid_state_function_test_sb 00:15:37.543 ************************************ 00:15:37.543 04:56:00 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=71758 00:15:37.543 Process raid pid: 71758 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71758' 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:37.543 04:56:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71758 /var/tmp/spdk-raid.sock 00:15:37.543 04:56:00 -- common/autotest_common.sh@829 -- # '[' -z 71758 ']' 00:15:37.543 04:56:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:37.543 04:56:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:37.543 04:56:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:37.543 04:56:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.543 04:56:00 -- common/autotest_common.sh@10 -- # set +x 00:15:37.543 [2024-11-18 04:56:00.896079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:37.543 [2024-11-18 04:56:00.896251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.802 [2024-11-18 04:56:01.066814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.802 [2024-11-18 04:56:01.242702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.061 [2024-11-18 04:56:01.394883] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.320 04:56:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.320 04:56:01 -- common/autotest_common.sh@862 -- # return 0 00:15:38.320 04:56:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:38.579 [2024-11-18 04:56:01.940109] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.579 [2024-11-18 04:56:01.940218] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.579 [2024-11-18 04:56:01.940252] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.579 [2024-11-18 04:56:01.940270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.579 [2024-11-18 04:56:01.940279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.579 [2024-11-18 04:56:01.940293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.579 04:56:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.838 04:56:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.838 "name": "Existed_Raid", 00:15:38.838 "uuid": "2cf6d5b4-6f6b-4525-ba5f-8dfc17f36e10", 00:15:38.838 "strip_size_kb": 64, 00:15:38.838 "state": "configuring", 00:15:38.838 "raid_level": "raid0", 00:15:38.838 "superblock": true, 00:15:38.838 "num_base_bdevs": 3, 00:15:38.838 "num_base_bdevs_discovered": 0, 00:15:38.838 "num_base_bdevs_operational": 3, 00:15:38.838 "base_bdevs_list": [ 00:15:38.838 { 00:15:38.838 "name": "BaseBdev1", 00:15:38.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.838 "is_configured": false, 00:15:38.838 "data_offset": 0, 00:15:38.838 "data_size": 0 00:15:38.838 }, 00:15:38.838 { 00:15:38.838 "name": "BaseBdev2", 00:15:38.838 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:38.838 "is_configured": false, 00:15:38.838 "data_offset": 0, 00:15:38.838 "data_size": 0 00:15:38.838 }, 00:15:38.838 { 00:15:38.838 "name": "BaseBdev3", 00:15:38.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.838 "is_configured": false, 00:15:38.838 "data_offset": 0, 00:15:38.838 "data_size": 0 00:15:38.838 } 00:15:38.838 ] 00:15:38.838 }' 00:15:38.838 04:56:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.838 04:56:02 -- common/autotest_common.sh@10 -- # set +x 00:15:39.097 04:56:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:39.356 [2024-11-18 04:56:02.680149] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.356 [2024-11-18 04:56:02.680236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:39.356 04:56:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:39.356 [2024-11-18 04:56:02.868244] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.356 [2024-11-18 04:56:02.868323] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.356 [2024-11-18 04:56:02.868337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.356 [2024-11-18 04:56:02.868354] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.356 [2024-11-18 04:56:02.868363] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.356 [2024-11-18 04:56:02.868376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.615 04:56:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.616 [2024-11-18 04:56:03.085765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.616 BaseBdev1 00:15:39.616 04:56:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:39.616 04:56:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:39.616 04:56:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:39.616 04:56:03 -- common/autotest_common.sh@899 -- # local i 00:15:39.616 04:56:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:39.616 04:56:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:39.616 04:56:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.900 04:56:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.210 [ 00:15:40.210 { 00:15:40.210 "name": "BaseBdev1", 00:15:40.210 "aliases": [ 00:15:40.210 "d8d78a20-b135-4788-a145-8dc78cff8068" 00:15:40.210 ], 00:15:40.210 "product_name": "Malloc disk", 00:15:40.210 "block_size": 512, 00:15:40.210 "num_blocks": 65536, 00:15:40.210 "uuid": "d8d78a20-b135-4788-a145-8dc78cff8068", 00:15:40.210 "assigned_rate_limits": { 00:15:40.210 "rw_ios_per_sec": 0, 00:15:40.210 "rw_mbytes_per_sec": 0, 00:15:40.210 "r_mbytes_per_sec": 0, 00:15:40.210 
"w_mbytes_per_sec": 0 00:15:40.210 }, 00:15:40.210 "claimed": true, 00:15:40.210 "claim_type": "exclusive_write", 00:15:40.210 "zoned": false, 00:15:40.210 "supported_io_types": { 00:15:40.210 "read": true, 00:15:40.210 "write": true, 00:15:40.210 "unmap": true, 00:15:40.210 "write_zeroes": true, 00:15:40.210 "flush": true, 00:15:40.210 "reset": true, 00:15:40.210 "compare": false, 00:15:40.210 "compare_and_write": false, 00:15:40.210 "abort": true, 00:15:40.210 "nvme_admin": false, 00:15:40.210 "nvme_io": false 00:15:40.210 }, 00:15:40.210 "memory_domains": [ 00:15:40.210 { 00:15:40.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.210 "dma_device_type": 2 00:15:40.210 } 00:15:40.210 ], 00:15:40.210 "driver_specific": {} 00:15:40.210 } 00:15:40.210 ] 00:15:40.210 04:56:03 -- common/autotest_common.sh@905 -- # return 0 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.210 04:56:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.468 04:56:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.468 "name": "Existed_Raid", 00:15:40.468 "uuid": "2ca1c3e7-138c-4849-acf8-52c4d45f32fd", 00:15:40.468 "strip_size_kb": 64, 00:15:40.468 "state": "configuring", 00:15:40.468 "raid_level": "raid0", 00:15:40.468 "superblock": true, 00:15:40.468 "num_base_bdevs": 3, 00:15:40.468 "num_base_bdevs_discovered": 1, 00:15:40.468 "num_base_bdevs_operational": 3, 00:15:40.468 "base_bdevs_list": [ 00:15:40.468 { 00:15:40.468 "name": "BaseBdev1", 00:15:40.468 "uuid": "d8d78a20-b135-4788-a145-8dc78cff8068", 00:15:40.468 "is_configured": true, 00:15:40.468 "data_offset": 2048, 00:15:40.468 "data_size": 63488 00:15:40.468 }, 00:15:40.468 { 00:15:40.468 "name": "BaseBdev2", 00:15:40.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.468 "is_configured": false, 00:15:40.468 "data_offset": 0, 00:15:40.468 "data_size": 0 00:15:40.468 }, 00:15:40.468 { 00:15:40.468 "name": "BaseBdev3", 00:15:40.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.468 "is_configured": false, 00:15:40.468 "data_offset": 0, 00:15:40.468 "data_size": 0 00:15:40.468 } 00:15:40.468 ] 00:15:40.468 }' 00:15:40.468 04:56:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.468 04:56:03 -- common/autotest_common.sh@10 -- # set +x 00:15:40.728 04:56:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.728 [2024-11-18 04:56:04.218089] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.728 [2024-11-18 04:56:04.218166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:40.728 04:56:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:40.728 04:56:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:40.987 04:56:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.247 BaseBdev1 00:15:41.247 04:56:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:41.247 04:56:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:41.247 04:56:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:41.247 04:56:04 -- common/autotest_common.sh@899 -- # local i 00:15:41.247 04:56:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:41.247 04:56:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:41.247 04:56:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.505 04:56:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.764 [ 00:15:41.764 { 00:15:41.764 "name": "BaseBdev1", 00:15:41.764 "aliases": [ 00:15:41.764 "6b089986-2056-49d5-9d4b-995158e57186" 00:15:41.764 ], 00:15:41.764 "product_name": "Malloc disk", 00:15:41.764 "block_size": 512, 00:15:41.764 "num_blocks": 65536, 00:15:41.764 "uuid": "6b089986-2056-49d5-9d4b-995158e57186", 00:15:41.764 "assigned_rate_limits": { 00:15:41.764 "rw_ios_per_sec": 0, 00:15:41.764 "rw_mbytes_per_sec": 0, 00:15:41.764 "r_mbytes_per_sec": 0, 00:15:41.764 "w_mbytes_per_sec": 0 00:15:41.764 }, 00:15:41.764 "claimed": false, 00:15:41.764 "zoned": false, 00:15:41.764 "supported_io_types": { 00:15:41.764 "read": true, 00:15:41.764 "write": true, 00:15:41.764 "unmap": true, 00:15:41.764 "write_zeroes": true, 00:15:41.764 "flush": true, 00:15:41.764 "reset": true, 00:15:41.764 "compare": false, 00:15:41.764 "compare_and_write": false, 00:15:41.764 "abort": true, 00:15:41.764 "nvme_admin": false, 00:15:41.764 "nvme_io": false 00:15:41.764 }, 00:15:41.764 "memory_domains": [ 00:15:41.764 { 00:15:41.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.764 "dma_device_type": 2 00:15:41.764 } 00:15:41.764 ], 00:15:41.764 "driver_specific": {} 00:15:41.764 } 00:15:41.764 ] 00:15:41.764 04:56:05 -- common/autotest_common.sh@905 -- # return 0 00:15:41.764 04:56:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:42.024 [2024-11-18 04:56:05.358419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.024 [2024-11-18 04:56:05.360540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.024 [2024-11-18 04:56:05.360628] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.024 [2024-11-18 04:56:05.360644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.024 [2024-11-18 04:56:05.360659] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:42.024 
04:56:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.024 04:56:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.283 04:56:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.283 "name": "Existed_Raid", 00:15:42.283 "uuid": "98bd5bc1-653a-42d7-b1aa-ef710335b827", 00:15:42.283 "strip_size_kb": 64, 00:15:42.283 "state": "configuring", 00:15:42.283 "raid_level": "raid0", 00:15:42.283 "superblock": true, 00:15:42.283 "num_base_bdevs": 3, 00:15:42.283 "num_base_bdevs_discovered": 1, 00:15:42.283 "num_base_bdevs_operational": 3, 00:15:42.283 "base_bdevs_list": [ 00:15:42.283 { 00:15:42.283 "name": "BaseBdev1", 00:15:42.283 "uuid": "6b089986-2056-49d5-9d4b-995158e57186", 00:15:42.283 "is_configured": true, 00:15:42.283 "data_offset": 2048, 00:15:42.283 "data_size": 63488 00:15:42.283 }, 00:15:42.283 { 00:15:42.283 "name": "BaseBdev2", 00:15:42.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.283 "is_configured": false, 00:15:42.283 "data_offset": 0, 00:15:42.283 "data_size": 0 00:15:42.283 }, 00:15:42.283 { 00:15:42.283 "name": "BaseBdev3", 00:15:42.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.283 "is_configured": false, 00:15:42.283 "data_offset": 0, 00:15:42.283 "data_size": 0 00:15:42.283 } 00:15:42.283 ] 00:15:42.283 }' 00:15:42.283 04:56:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.283 04:56:05 -- common/autotest_common.sh@10 -- # set +x 00:15:42.542 04:56:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:42.800 [2024-11-18 04:56:06.166136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.800 BaseBdev2 00:15:42.800 04:56:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:42.800 04:56:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:42.800 04:56:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.800 04:56:06 -- common/autotest_common.sh@899 -- # local i 00:15:42.800 04:56:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.800 04:56:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.800 04:56:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.059 04:56:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.318 [ 00:15:43.318 { 00:15:43.318 "name": "BaseBdev2", 00:15:43.318 "aliases": [ 00:15:43.318 
"79748cc2-0966-43fc-ba64-f9a5cb38b22d" 00:15:43.318 ], 00:15:43.318 "product_name": "Malloc disk", 00:15:43.318 "block_size": 512, 00:15:43.318 "num_blocks": 65536, 00:15:43.318 "uuid": "79748cc2-0966-43fc-ba64-f9a5cb38b22d", 00:15:43.318 "assigned_rate_limits": { 00:15:43.318 "rw_ios_per_sec": 0, 00:15:43.318 "rw_mbytes_per_sec": 0, 00:15:43.318 "r_mbytes_per_sec": 0, 00:15:43.318 "w_mbytes_per_sec": 0 00:15:43.318 }, 00:15:43.318 "claimed": true, 00:15:43.318 "claim_type": "exclusive_write", 00:15:43.318 "zoned": false, 00:15:43.318 "supported_io_types": { 00:15:43.318 "read": true, 00:15:43.318 "write": true, 00:15:43.318 "unmap": true, 00:15:43.318 "write_zeroes": true, 00:15:43.318 "flush": true, 00:15:43.318 "reset": true, 00:15:43.318 "compare": false, 00:15:43.318 "compare_and_write": false, 00:15:43.318 "abort": true, 00:15:43.318 "nvme_admin": false, 00:15:43.318 "nvme_io": false 00:15:43.318 }, 00:15:43.318 "memory_domains": [ 00:15:43.318 { 00:15:43.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.318 "dma_device_type": 2 00:15:43.318 } 00:15:43.318 ], 00:15:43.318 "driver_specific": {} 00:15:43.318 } 00:15:43.318 ] 00:15:43.318 04:56:06 -- common/autotest_common.sh@905 -- # return 0 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.318 04:56:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.577 04:56:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.577 "name": "Existed_Raid", 00:15:43.577 "uuid": "98bd5bc1-653a-42d7-b1aa-ef710335b827", 00:15:43.577 "strip_size_kb": 64, 00:15:43.577 "state": "configuring", 00:15:43.577 "raid_level": "raid0", 00:15:43.577 "superblock": true, 00:15:43.577 "num_base_bdevs": 3, 00:15:43.577 "num_base_bdevs_discovered": 2, 00:15:43.577 "num_base_bdevs_operational": 3, 00:15:43.577 "base_bdevs_list": [ 00:15:43.577 { 00:15:43.577 "name": "BaseBdev1", 00:15:43.577 "uuid": "6b089986-2056-49d5-9d4b-995158e57186", 00:15:43.577 "is_configured": true, 00:15:43.577 "data_offset": 2048, 00:15:43.577 "data_size": 63488 00:15:43.577 }, 00:15:43.577 { 00:15:43.577 "name": "BaseBdev2", 00:15:43.577 "uuid": "79748cc2-0966-43fc-ba64-f9a5cb38b22d", 00:15:43.577 "is_configured": true, 00:15:43.577 "data_offset": 2048, 00:15:43.577 "data_size": 63488 00:15:43.577 }, 00:15:43.577 { 00:15:43.577 "name": "BaseBdev3", 00:15:43.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.577 "is_configured": false, 00:15:43.577 "data_offset": 0, 00:15:43.577 "data_size": 0 00:15:43.577 
} 00:15:43.577 ] 00:15:43.577 }' 00:15:43.577 04:56:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.577 04:56:06 -- common/autotest_common.sh@10 -- # set +x 00:15:43.837 04:56:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.096 [2024-11-18 04:56:07.452030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.096 [2024-11-18 04:56:07.452324] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:44.096 [2024-11-18 04:56:07.452348] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:44.096 [2024-11-18 04:56:07.452493] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:44.096 [2024-11-18 04:56:07.452854] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:44.096 [2024-11-18 04:56:07.452882] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:44.096 [2024-11-18 04:56:07.453042] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.096 BaseBdev3 00:15:44.096 04:56:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:44.096 04:56:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:44.096 04:56:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:44.096 04:56:07 -- common/autotest_common.sh@899 -- # local i 00:15:44.096 04:56:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:44.096 04:56:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:44.096 04:56:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.355 04:56:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.615 [ 00:15:44.615 { 00:15:44.615 "name": "BaseBdev3", 00:15:44.615 "aliases": [ 00:15:44.615 "66b8cfe7-22d8-484e-8ef7-85e33450932b" 00:15:44.615 ], 00:15:44.615 "product_name": "Malloc disk", 00:15:44.615 "block_size": 512, 00:15:44.615 "num_blocks": 65536, 00:15:44.615 "uuid": "66b8cfe7-22d8-484e-8ef7-85e33450932b", 00:15:44.615 "assigned_rate_limits": { 00:15:44.615 "rw_ios_per_sec": 0, 00:15:44.615 "rw_mbytes_per_sec": 0, 00:15:44.615 "r_mbytes_per_sec": 0, 00:15:44.615 "w_mbytes_per_sec": 0 00:15:44.615 }, 00:15:44.615 "claimed": true, 00:15:44.615 "claim_type": "exclusive_write", 00:15:44.615 "zoned": false, 00:15:44.615 "supported_io_types": { 00:15:44.615 "read": true, 00:15:44.615 "write": true, 00:15:44.615 "unmap": true, 00:15:44.615 "write_zeroes": true, 00:15:44.615 "flush": true, 00:15:44.615 "reset": true, 00:15:44.615 "compare": false, 00:15:44.615 "compare_and_write": false, 00:15:44.615 "abort": true, 00:15:44.615 "nvme_admin": false, 00:15:44.615 "nvme_io": false 00:15:44.615 }, 00:15:44.615 "memory_domains": [ 00:15:44.615 { 00:15:44.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.615 "dma_device_type": 2 00:15:44.615 } 00:15:44.615 ], 00:15:44.615 "driver_specific": {} 00:15:44.615 } 00:15:44.615 ] 00:15:44.615 04:56:07 -- common/autotest_common.sh@905 -- # return 0 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.615 04:56:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.615 04:56:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.615 04:56:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.875 04:56:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.875 "name": "Existed_Raid", 00:15:44.875 "uuid": "98bd5bc1-653a-42d7-b1aa-ef710335b827", 00:15:44.875 "strip_size_kb": 64, 00:15:44.875 "state": "online", 00:15:44.875 "raid_level": "raid0", 00:15:44.875 "superblock": true, 00:15:44.875 "num_base_bdevs": 3, 00:15:44.875 "num_base_bdevs_discovered": 3, 00:15:44.875 "num_base_bdevs_operational": 3, 00:15:44.875 "base_bdevs_list": [ 00:15:44.875 { 00:15:44.875 "name": "BaseBdev1", 00:15:44.875 "uuid": "6b089986-2056-49d5-9d4b-995158e57186", 00:15:44.875 "is_configured": true, 00:15:44.875 "data_offset": 2048, 00:15:44.875 "data_size": 63488 00:15:44.875 }, 00:15:44.875 { 00:15:44.875 "name": "BaseBdev2", 00:15:44.875 "uuid": "79748cc2-0966-43fc-ba64-f9a5cb38b22d", 00:15:44.875 "is_configured": true, 00:15:44.875 "data_offset": 2048, 00:15:44.875 "data_size": 63488 00:15:44.875 }, 00:15:44.875 { 00:15:44.875 "name": "BaseBdev3", 00:15:44.875 "uuid": "66b8cfe7-22d8-484e-8ef7-85e33450932b", 00:15:44.875 "is_configured": true, 00:15:44.875 "data_offset": 2048, 00:15:44.875 "data_size": 63488 00:15:44.875 } 00:15:44.875 ] 00:15:44.875 }' 00:15:44.875 04:56:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.875 04:56:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.134 04:56:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:45.393 [2024-11-18 04:56:08.736493] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.393 [2024-11-18 04:56:08.736555] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.393 [2024-11-18 04:56:08.736639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.393 04:56:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:45.393 04:56:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:45.393 04:56:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.394 04:56:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.653 04:56:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.653 "name": "Existed_Raid", 00:15:45.653 "uuid": "98bd5bc1-653a-42d7-b1aa-ef710335b827", 00:15:45.653 "strip_size_kb": 64, 00:15:45.653 "state": "offline", 00:15:45.653 "raid_level": "raid0", 00:15:45.653 "superblock": true, 00:15:45.653 "num_base_bdevs": 3, 00:15:45.653 "num_base_bdevs_discovered": 2, 00:15:45.653 "num_base_bdevs_operational": 2, 00:15:45.653 "base_bdevs_list": [ 00:15:45.653 { 00:15:45.653 "name": null, 00:15:45.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.653 "is_configured": false, 00:15:45.653 "data_offset": 2048, 00:15:45.653 "data_size": 63488 00:15:45.653 }, 00:15:45.653 { 00:15:45.653 "name": "BaseBdev2", 00:15:45.653 "uuid": "79748cc2-0966-43fc-ba64-f9a5cb38b22d", 00:15:45.653 "is_configured": true, 00:15:45.653 "data_offset": 2048, 00:15:45.653 "data_size": 63488 00:15:45.653 }, 00:15:45.653 { 00:15:45.653 "name": "BaseBdev3", 00:15:45.653 "uuid": "66b8cfe7-22d8-484e-8ef7-85e33450932b", 00:15:45.653 "is_configured": true, 00:15:45.653 "data_offset": 2048, 00:15:45.653 "data_size": 63488 00:15:45.653 } 00:15:45.653 ] 00:15:45.653 }' 00:15:45.653 04:56:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.653 04:56:09 -- common/autotest_common.sh@10 -- # set +x 00:15:45.911 04:56:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:45.911 04:56:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:45.911 04:56:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.912 04:56:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:46.170 04:56:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:46.170 04:56:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.170 04:56:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:46.430 [2024-11-18 04:56:09.842146] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.430 04:56:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:46.430 04:56:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:46.430 04:56:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.430 04:56:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:46.688 04:56:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:46.688 04:56:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.688 04:56:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:46.950 [2024-11-18 04:56:10.330766] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:46.950 [2024-11-18 
04:56:10.330872] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:46.950 04:56:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:46.950 04:56:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:46.950 04:56:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:46.950 04:56:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.209 04:56:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:47.209 04:56:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:47.209 04:56:10 -- bdev/bdev_raid.sh@287 -- # killprocess 71758 00:15:47.209 04:56:10 -- common/autotest_common.sh@936 -- # '[' -z 71758 ']' 00:15:47.209 04:56:10 -- common/autotest_common.sh@940 -- # kill -0 71758 00:15:47.209 04:56:10 -- common/autotest_common.sh@941 -- # uname 00:15:47.210 04:56:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.210 04:56:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71758 00:15:47.210 killing process with pid 71758 00:15:47.210 04:56:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:47.210 04:56:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:47.210 04:56:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71758' 00:15:47.210 04:56:10 -- common/autotest_common.sh@955 -- # kill 71758 00:15:47.210 04:56:10 -- common/autotest_common.sh@960 -- # wait 71758 00:15:47.210 [2024-11-18 04:56:10.709016] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.210 [2024-11-18 04:56:10.709529] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.588 ************************************ 00:15:48.588 END TEST raid_state_function_test_sb 00:15:48.588 ************************************ 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:48.588 00:15:48.588 real 0m10.942s 00:15:48.588 user 0m18.186s 00:15:48.588 sys 0m1.607s 00:15:48.588 04:56:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.588 04:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:48.588 04:56:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:48.588 04:56:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.588 04:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:48.588 ************************************ 00:15:48.588 START TEST raid_superblock_test 00:15:48.588 ************************************ 00:15:48.588 04:56:11 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@345 
-- # local strip_size_create_arg 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@357 -- # raid_pid=72113 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@358 -- # waitforlisten 72113 /var/tmp/spdk-raid.sock 00:15:48.588 04:56:11 -- common/autotest_common.sh@829 -- # '[' -z 72113 ']' 00:15:48.588 04:56:11 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:48.588 04:56:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:48.588 04:56:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:48.588 04:56:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:48.588 04:56:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.588 04:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:48.588 [2024-11-18 04:56:11.892936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:48.588 [2024-11-18 04:56:11.893119] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72113 ] 00:15:48.588 [2024-11-18 04:56:12.060978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.847 [2024-11-18 04:56:12.294333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.105 [2024-11-18 04:56:12.454413] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.364 04:56:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.364 04:56:12 -- common/autotest_common.sh@862 -- # return 0 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.364 04:56:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:49.622 malloc1 00:15:49.622 04:56:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.881 [2024-11-18 04:56:13.280004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.881 [2024-11-18 04:56:13.280109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.881 [2024-11-18 
04:56:13.280147] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:49.881 [2024-11-18 04:56:13.280160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.881 [2024-11-18 04:56:13.282745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.881 [2024-11-18 04:56:13.282803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.881 pt1 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.881 04:56:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:50.140 malloc2 00:15:50.140 04:56:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.400 [2024-11-18 04:56:13.768547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.400 [2024-11-18 04:56:13.768686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.400 [2024-11-18 04:56:13.768739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:50.400 [2024-11-18 04:56:13.768759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.400 [2024-11-18 04:56:13.771632] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.400 [2024-11-18 04:56:13.771703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.400 pt2 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.400 04:56:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:50.659 malloc3 00:15:50.659 04:56:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.919 [2024-11-18 04:56:14.238925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.919 [2024-11-18 04:56:14.239008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.919 [2024-11-18 
04:56:14.239042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:15:50.919 [2024-11-18 04:56:14.239056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.919 [2024-11-18 04:56:14.241358] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.919 [2024-11-18 04:56:14.241412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.919 pt3 00:15:50.919 04:56:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:50.919 04:56:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.919 04:56:14 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:51.178 [2024-11-18 04:56:14.447048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.178 [2024-11-18 04:56:14.449143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.178 [2024-11-18 04:56:14.449256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.178 [2024-11-18 04:56:14.449537] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:15:51.178 [2024-11-18 04:56:14.449574] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:51.178 [2024-11-18 04:56:14.449722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:51.178 [2024-11-18 04:56:14.450113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:15:51.178 [2024-11-18 04:56:14.450141] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:15:51.178 [2024-11-18 04:56:14.450374] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.178 04:56:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.437 04:56:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.437 "name": "raid_bdev1", 00:15:51.437 "uuid": "308fd6c9-e79e-47ef-8f21-a610a3cdd66c", 00:15:51.437 "strip_size_kb": 64, 00:15:51.437 "state": "online", 00:15:51.437 "raid_level": "raid0", 00:15:51.437 "superblock": true, 00:15:51.437 "num_base_bdevs": 3, 00:15:51.437 "num_base_bdevs_discovered": 3, 00:15:51.437 "num_base_bdevs_operational": 3, 00:15:51.437 "base_bdevs_list": [ 00:15:51.437 { 00:15:51.437 "name": "pt1", 00:15:51.437 "uuid": "d2ed89e1-7397-510c-b33e-d2164e6c0fd3", 
00:15:51.437 "is_configured": true, 00:15:51.437 "data_offset": 2048, 00:15:51.437 "data_size": 63488 00:15:51.437 }, 00:15:51.437 { 00:15:51.437 "name": "pt2", 00:15:51.437 "uuid": "d95643e9-a91f-552c-bfbd-17851a59fcc2", 00:15:51.437 "is_configured": true, 00:15:51.437 "data_offset": 2048, 00:15:51.437 "data_size": 63488 00:15:51.437 }, 00:15:51.437 { 00:15:51.437 "name": "pt3", 00:15:51.437 "uuid": "85ad0076-b4e3-5121-a511-26f98703179c", 00:15:51.437 "is_configured": true, 00:15:51.437 "data_offset": 2048, 00:15:51.437 "data_size": 63488 00:15:51.437 } 00:15:51.437 ] 00:15:51.437 }' 00:15:51.437 04:56:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.437 04:56:14 -- common/autotest_common.sh@10 -- # set +x 00:15:51.696 04:56:15 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:51.696 04:56:15 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:51.954 [2024-11-18 04:56:15.223491] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.954 04:56:15 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=308fd6c9-e79e-47ef-8f21-a610a3cdd66c 00:15:51.954 04:56:15 -- bdev/bdev_raid.sh@380 -- # '[' -z 308fd6c9-e79e-47ef-8f21-a610a3cdd66c ']' 00:15:51.954 04:56:15 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:51.954 [2024-11-18 04:56:15.475336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.954 [2024-11-18 04:56:15.475372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.954 [2024-11-18 04:56:15.475475] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.954 [2024-11-18 04:56:15.475563] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.954 [2024-11-18 04:56:15.475580] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:15:52.214 04:56:15 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.214 04:56:15 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:52.214 04:56:15 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:52.214 04:56:15 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:52.214 04:56:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.214 04:56:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:52.473 04:56:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.473 04:56:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:52.731 04:56:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.731 04:56:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:52.990 04:56:16 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:52.990 04:56:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:53.249 04:56:16 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:53.249 04:56:16 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:53.249 04:56:16 -- common/autotest_common.sh@650 -- # local es=0 00:15:53.249 04:56:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:53.249 04:56:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.249 04:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.249 04:56:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.249 04:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.249 04:56:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.249 04:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.249 04:56:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.249 04:56:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:53.249 04:56:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:53.509 [2024-11-18 04:56:16.811637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:53.509 [2024-11-18 04:56:16.813679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:53.509 [2024-11-18 04:56:16.813753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:53.509 [2024-11-18 04:56:16.813814] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:53.509 [2024-11-18 04:56:16.813901] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:53.509 [2024-11-18 04:56:16.813933] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:53.509 [2024-11-18 04:56:16.813953] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.509 [2024-11-18 04:56:16.813970] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:15:53.509 request: 00:15:53.509 { 00:15:53.509 "name": "raid_bdev1", 00:15:53.509 "raid_level": "raid0", 00:15:53.509 "base_bdevs": [ 00:15:53.509 "malloc1", 00:15:53.509 "malloc2", 00:15:53.509 "malloc3" 00:15:53.509 ], 00:15:53.509 "superblock": false, 00:15:53.509 "strip_size_kb": 64, 00:15:53.509 "method": "bdev_raid_create", 00:15:53.509 "req_id": 1 00:15:53.509 } 00:15:53.509 Got JSON-RPC error response 00:15:53.509 response: 00:15:53.509 { 00:15:53.509 "code": -17, 00:15:53.509 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:53.509 } 00:15:53.509 04:56:16 -- common/autotest_common.sh@653 -- # es=1 00:15:53.509 04:56:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:53.509 04:56:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:53.509 04:56:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:53.509 04:56:16 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
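This is the negative path: bdev_raid_create against malloc1..malloc3 is expected to fail with -17 "File exists", because each malloc bdev still carries the raid superblock written for the earlier raid_bdev1. Judging from the trace, the NOT() helper in autotest_common.sh simply inverts the command's exit status; a roughly equivalent sketch, reusing the $rpc/$sock variables from the earlier example:

  # expected-failure check: creation must be rejected while the
  # stale superblocks still claim the base bdevs for raid_bdev1
  if "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo 'bdev_raid_create unexpectedly succeeded' >&2
      exit 1
  fi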
00:15:53.509 04:56:16 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.768 [2024-11-18 04:56:17.259746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.768 [2024-11-18 04:56:17.259827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.768 [2024-11-18 04:56:17.259885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:53.768 [2024-11-18 04:56:17.259916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.768 [2024-11-18 04:56:17.262455] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.768 [2024-11-18 04:56:17.262517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.768 [2024-11-18 04:56:17.262624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:53.768 [2024-11-18 04:56:17.262695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.768 pt1 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.768 04:56:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.065 04:56:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.065 "name": "raid_bdev1", 00:15:54.065 "uuid": "308fd6c9-e79e-47ef-8f21-a610a3cdd66c", 00:15:54.065 "strip_size_kb": 64, 00:15:54.065 "state": "configuring", 00:15:54.065 "raid_level": "raid0", 00:15:54.065 "superblock": true, 00:15:54.065 "num_base_bdevs": 3, 00:15:54.065 "num_base_bdevs_discovered": 1, 00:15:54.065 "num_base_bdevs_operational": 3, 00:15:54.065 "base_bdevs_list": [ 00:15:54.065 { 00:15:54.065 "name": "pt1", 00:15:54.065 "uuid": "d2ed89e1-7397-510c-b33e-d2164e6c0fd3", 00:15:54.065 "is_configured": true, 00:15:54.065 "data_offset": 2048, 00:15:54.065 "data_size": 63488 00:15:54.065 }, 00:15:54.065 { 00:15:54.065 "name": null, 00:15:54.065 "uuid": "d95643e9-a91f-552c-bfbd-17851a59fcc2", 00:15:54.065 "is_configured": false, 00:15:54.065 "data_offset": 2048, 00:15:54.065 "data_size": 63488 00:15:54.065 }, 00:15:54.065 { 00:15:54.065 "name": null, 00:15:54.065 "uuid": "85ad0076-b4e3-5121-a511-26f98703179c", 00:15:54.065 "is_configured": false, 00:15:54.065 "data_offset": 2048, 00:15:54.065 "data_size": 63488 
00:15:54.065 } 00:15:54.065 ] 00:15:54.065 }' 00:15:54.065 04:56:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.065 04:56:17 -- common/autotest_common.sh@10 -- # set +x 00:15:54.347 04:56:17 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:54.347 04:56:17 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.606 [2024-11-18 04:56:18.039957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.606 [2024-11-18 04:56:18.040065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.606 [2024-11-18 04:56:18.040094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:15:54.606 [2024-11-18 04:56:18.040110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.606 [2024-11-18 04:56:18.040670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.606 [2024-11-18 04:56:18.040739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.606 [2024-11-18 04:56:18.040840] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:54.606 [2024-11-18 04:56:18.040900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.606 pt2 00:15:54.606 04:56:18 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:54.865 [2024-11-18 04:56:18.300053] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.865 04:56:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.124 04:56:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.124 "name": "raid_bdev1", 00:15:55.124 "uuid": "308fd6c9-e79e-47ef-8f21-a610a3cdd66c", 00:15:55.124 "strip_size_kb": 64, 00:15:55.124 "state": "configuring", 00:15:55.124 "raid_level": "raid0", 00:15:55.124 "superblock": true, 00:15:55.124 "num_base_bdevs": 3, 00:15:55.124 "num_base_bdevs_discovered": 1, 00:15:55.124 "num_base_bdevs_operational": 3, 00:15:55.124 "base_bdevs_list": [ 00:15:55.124 { 00:15:55.124 "name": "pt1", 00:15:55.124 "uuid": "d2ed89e1-7397-510c-b33e-d2164e6c0fd3", 00:15:55.124 "is_configured": true, 00:15:55.124 "data_offset": 2048, 00:15:55.124 "data_size": 63488 00:15:55.124 }, 00:15:55.124 { 00:15:55.124 "name": null, 00:15:55.124 "uuid": "d95643e9-a91f-552c-bfbd-17851a59fcc2", 00:15:55.124 
"is_configured": false, 00:15:55.124 "data_offset": 2048, 00:15:55.124 "data_size": 63488 00:15:55.124 }, 00:15:55.124 { 00:15:55.124 "name": null, 00:15:55.124 "uuid": "85ad0076-b4e3-5121-a511-26f98703179c", 00:15:55.124 "is_configured": false, 00:15:55.124 "data_offset": 2048, 00:15:55.124 "data_size": 63488 00:15:55.124 } 00:15:55.124 ] 00:15:55.124 }' 00:15:55.124 04:56:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.124 04:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:55.384 04:56:18 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:55.384 04:56:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.384 04:56:18 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.643 [2024-11-18 04:56:19.052207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.643 [2024-11-18 04:56:19.052324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.643 [2024-11-18 04:56:19.052354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:15:55.643 [2024-11-18 04:56:19.052367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.643 [2024-11-18 04:56:19.052937] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.643 [2024-11-18 04:56:19.052986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.643 [2024-11-18 04:56:19.053130] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:55.643 [2024-11-18 04:56:19.053156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.643 pt2 00:15:55.643 04:56:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:55.643 04:56:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.643 04:56:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.901 [2024-11-18 04:56:19.256274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.901 [2024-11-18 04:56:19.256379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.901 [2024-11-18 04:56:19.256410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:15:55.901 [2024-11-18 04:56:19.256423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.901 [2024-11-18 04:56:19.256930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.901 [2024-11-18 04:56:19.256964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.901 [2024-11-18 04:56:19.257095] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:55.901 [2024-11-18 04:56:19.257137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.901 [2024-11-18 04:56:19.257338] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:15:55.901 [2024-11-18 04:56:19.257354] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:55.901 [2024-11-18 04:56:19.257463] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:55.901 [2024-11-18 
04:56:19.257836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:15:55.901 [2024-11-18 04:56:19.257881] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:15:55.901 [2024-11-18 04:56:19.258035] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.901 pt3 00:15:55.901 04:56:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:55.901 04:56:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.901 04:56:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:55.901 04:56:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.901 04:56:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:55.901 04:56:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.902 04:56:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.160 04:56:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.160 "name": "raid_bdev1", 00:15:56.160 "uuid": "308fd6c9-e79e-47ef-8f21-a610a3cdd66c", 00:15:56.160 "strip_size_kb": 64, 00:15:56.160 "state": "online", 00:15:56.160 "raid_level": "raid0", 00:15:56.160 "superblock": true, 00:15:56.160 "num_base_bdevs": 3, 00:15:56.160 "num_base_bdevs_discovered": 3, 00:15:56.160 "num_base_bdevs_operational": 3, 00:15:56.160 "base_bdevs_list": [ 00:15:56.160 { 00:15:56.160 "name": "pt1", 00:15:56.160 "uuid": "d2ed89e1-7397-510c-b33e-d2164e6c0fd3", 00:15:56.160 "is_configured": true, 00:15:56.160 "data_offset": 2048, 00:15:56.160 "data_size": 63488 00:15:56.160 }, 00:15:56.160 { 00:15:56.160 "name": "pt2", 00:15:56.160 "uuid": "d95643e9-a91f-552c-bfbd-17851a59fcc2", 00:15:56.160 "is_configured": true, 00:15:56.160 "data_offset": 2048, 00:15:56.160 "data_size": 63488 00:15:56.160 }, 00:15:56.160 { 00:15:56.160 "name": "pt3", 00:15:56.160 "uuid": "85ad0076-b4e3-5121-a511-26f98703179c", 00:15:56.160 "is_configured": true, 00:15:56.160 "data_offset": 2048, 00:15:56.160 "data_size": 63488 00:15:56.160 } 00:15:56.160 ] 00:15:56.160 }' 00:15:56.160 04:56:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.160 04:56:19 -- common/autotest_common.sh@10 -- # set +x 00:15:56.418 04:56:19 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.418 04:56:19 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:56.677 [2024-11-18 04:56:20.032767] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.677 04:56:20 -- bdev/bdev_raid.sh@430 -- # '[' 308fd6c9-e79e-47ef-8f21-a610a3cdd66c '!=' 308fd6c9-e79e-47ef-8f21-a610a3cdd66c ']' 00:15:56.677 04:56:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:56.677 04:56:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:56.677 04:56:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:56.677 
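At this point raid_bdev1 has been rebuilt from pt1/pt2/pt3, and verify_raid_bdev_state confirms state "online" by filtering bdev_raid_get_bdevs output through jq. A condensed sketch of that check, using only commands and JSON fields that appear in the trace:

  # fetch the raid bdev's state and assert it is online
  state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state')
  [ "$state" = online ] || { echo "raid_bdev1 is $state, not online" >&2; exit 1; }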
04:56:20 -- bdev/bdev_raid.sh@511 -- # killprocess 72113 00:15:56.677 04:56:20 -- common/autotest_common.sh@936 -- # '[' -z 72113 ']' 00:15:56.677 04:56:20 -- common/autotest_common.sh@940 -- # kill -0 72113 00:15:56.677 04:56:20 -- common/autotest_common.sh@941 -- # uname 00:15:56.677 04:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.677 04:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72113 00:15:56.677 killing process with pid 72113 00:15:56.677 04:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.677 04:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.677 04:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72113' 00:15:56.677 04:56:20 -- common/autotest_common.sh@955 -- # kill 72113 00:15:56.677 [2024-11-18 04:56:20.085157] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.677 04:56:20 -- common/autotest_common.sh@960 -- # wait 72113 00:15:56.677 [2024-11-18 04:56:20.085300] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.677 [2024-11-18 04:56:20.085396] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.677 [2024-11-18 04:56:20.085413] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:15:56.936 [2024-11-18 04:56:20.290936] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:57.874 00:15:57.874 real 0m9.479s 00:15:57.874 user 0m15.656s 00:15:57.874 sys 0m1.340s 00:15:57.874 04:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:57.874 ************************************ 00:15:57.874 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:15:57.874 END TEST raid_superblock_test 00:15:57.874 ************************************ 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:57.874 04:56:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:57.874 04:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.874 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:15:57.874 ************************************ 00:15:57.874 START TEST raid_state_function_test 00:15:57.874 ************************************ 00:15:57.874 04:56:21 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@208 -- # echo 
BaseBdev3 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=72389 00:15:57.874 Process raid pid: 72389 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72389' 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72389 /var/tmp/spdk-raid.sock 00:15:57.874 04:56:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:57.874 04:56:21 -- common/autotest_common.sh@829 -- # '[' -z 72389 ']' 00:15:57.874 04:56:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.874 04:56:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.874 04:56:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.874 04:56:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.874 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:15:58.134 [2024-11-18 04:56:21.435652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
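raid_state_function_test then boots its own RPC target: bdev_svc is started on the raid socket with bdev_raid debug logging, and waitforlisten blocks until the socket answers. A rough sketch of that startup, with the polling loop standing in for waitforlisten (an assumption; the real helper also watches the pid):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # wait until the UNIX domain socket accepts RPCs
  until "$rpc" -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done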
00:15:58.134 [2024-11-18 04:56:21.435823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.134 [2024-11-18 04:56:21.602626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.393 [2024-11-18 04:56:21.774499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.652 [2024-11-18 04:56:21.944924] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.911 04:56:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.911 04:56:22 -- common/autotest_common.sh@862 -- # return 0 00:15:58.911 04:56:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:59.170 [2024-11-18 04:56:22.553732] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.170 [2024-11-18 04:56:22.553817] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.170 [2024-11-18 04:56:22.553831] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.170 [2024-11-18 04:56:22.553844] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.170 [2024-11-18 04:56:22.553853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.170 [2024-11-18 04:56:22.553864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.170 04:56:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.429 04:56:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.429 "name": "Existed_Raid", 00:15:59.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.429 "strip_size_kb": 64, 00:15:59.429 "state": "configuring", 00:15:59.429 "raid_level": "concat", 00:15:59.429 "superblock": false, 00:15:59.429 "num_base_bdevs": 3, 00:15:59.429 "num_base_bdevs_discovered": 0, 00:15:59.429 "num_base_bdevs_operational": 3, 00:15:59.429 "base_bdevs_list": [ 00:15:59.429 { 00:15:59.429 "name": "BaseBdev1", 00:15:59.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.429 "is_configured": false, 00:15:59.429 "data_offset": 0, 00:15:59.429 "data_size": 0 00:15:59.429 }, 00:15:59.429 { 00:15:59.429 "name": "BaseBdev2", 00:15:59.429 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:59.429 "is_configured": false, 00:15:59.429 "data_offset": 0, 00:15:59.429 "data_size": 0 00:15:59.429 }, 00:15:59.429 { 00:15:59.429 "name": "BaseBdev3", 00:15:59.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.429 "is_configured": false, 00:15:59.429 "data_offset": 0, 00:15:59.429 "data_size": 0 00:15:59.429 } 00:15:59.429 ] 00:15:59.429 }' 00:15:59.429 04:56:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.429 04:56:22 -- common/autotest_common.sh@10 -- # set +x 00:15:59.688 04:56:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.947 [2024-11-18 04:56:23.265864] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.947 [2024-11-18 04:56:23.265932] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:59.947 04:56:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:00.206 [2024-11-18 04:56:23.513952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.206 [2024-11-18 04:56:23.514051] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.207 [2024-11-18 04:56:23.514065] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.207 [2024-11-18 04:56:23.514081] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.207 [2024-11-18 04:56:23.514089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.207 [2024-11-18 04:56:23.514101] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.207 04:56:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.465 [2024-11-18 04:56:23.788874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.465 BaseBdev1 00:16:00.465 04:56:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:00.465 04:56:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:00.465 04:56:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:00.465 04:56:23 -- common/autotest_common.sh@899 -- # local i 00:16:00.465 04:56:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:00.465 04:56:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:00.465 04:56:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.725 04:56:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.725 [ 00:16:00.725 { 00:16:00.725 "name": "BaseBdev1", 00:16:00.725 "aliases": [ 00:16:00.725 "847ec198-b0cb-4460-831a-33cca77c7e04" 00:16:00.725 ], 00:16:00.725 "product_name": "Malloc disk", 00:16:00.725 "block_size": 512, 00:16:00.725 "num_blocks": 65536, 00:16:00.725 "uuid": "847ec198-b0cb-4460-831a-33cca77c7e04", 00:16:00.725 "assigned_rate_limits": { 00:16:00.725 "rw_ios_per_sec": 0, 00:16:00.725 "rw_mbytes_per_sec": 0, 00:16:00.725 "r_mbytes_per_sec": 0, 00:16:00.725 "w_mbytes_per_sec": 
0 00:16:00.725 }, 00:16:00.725 "claimed": true, 00:16:00.725 "claim_type": "exclusive_write", 00:16:00.725 "zoned": false, 00:16:00.725 "supported_io_types": { 00:16:00.725 "read": true, 00:16:00.725 "write": true, 00:16:00.725 "unmap": true, 00:16:00.725 "write_zeroes": true, 00:16:00.725 "flush": true, 00:16:00.725 "reset": true, 00:16:00.725 "compare": false, 00:16:00.725 "compare_and_write": false, 00:16:00.725 "abort": true, 00:16:00.725 "nvme_admin": false, 00:16:00.725 "nvme_io": false 00:16:00.725 }, 00:16:00.725 "memory_domains": [ 00:16:00.725 { 00:16:00.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.725 "dma_device_type": 2 00:16:00.725 } 00:16:00.725 ], 00:16:00.725 "driver_specific": {} 00:16:00.725 } 00:16:00.725 ] 00:16:00.725 04:56:24 -- common/autotest_common.sh@905 -- # return 0 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.725 04:56:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.984 04:56:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.984 "name": "Existed_Raid", 00:16:00.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.984 "strip_size_kb": 64, 00:16:00.984 "state": "configuring", 00:16:00.984 "raid_level": "concat", 00:16:00.984 "superblock": false, 00:16:00.984 "num_base_bdevs": 3, 00:16:00.984 "num_base_bdevs_discovered": 1, 00:16:00.984 "num_base_bdevs_operational": 3, 00:16:00.984 "base_bdevs_list": [ 00:16:00.984 { 00:16:00.984 "name": "BaseBdev1", 00:16:00.984 "uuid": "847ec198-b0cb-4460-831a-33cca77c7e04", 00:16:00.984 "is_configured": true, 00:16:00.984 "data_offset": 0, 00:16:00.984 "data_size": 65536 00:16:00.984 }, 00:16:00.984 { 00:16:00.984 "name": "BaseBdev2", 00:16:00.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.984 "is_configured": false, 00:16:00.984 "data_offset": 0, 00:16:00.984 "data_size": 0 00:16:00.984 }, 00:16:00.984 { 00:16:00.984 "name": "BaseBdev3", 00:16:00.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.984 "is_configured": false, 00:16:00.984 "data_offset": 0, 00:16:00.984 "data_size": 0 00:16:00.984 } 00:16:00.984 ] 00:16:00.984 }' 00:16:00.984 04:56:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.984 04:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.246 04:56:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:01.505 [2024-11-18 04:56:24.921372] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.505 [2024-11-18 04:56:24.921450] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:16:01.505 04:56:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:01.505 04:56:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:01.764 [2024-11-18 04:56:25.173483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.764 [2024-11-18 04:56:25.175696] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.764 [2024-11-18 04:56:25.175781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.764 [2024-11-18 04:56:25.175796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.764 [2024-11-18 04:56:25.175811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.764 04:56:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.765 04:56:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.765 04:56:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.765 04:56:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.024 04:56:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.024 "name": "Existed_Raid", 00:16:02.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.024 "strip_size_kb": 64, 00:16:02.024 "state": "configuring", 00:16:02.024 "raid_level": "concat", 00:16:02.024 "superblock": false, 00:16:02.024 "num_base_bdevs": 3, 00:16:02.024 "num_base_bdevs_discovered": 1, 00:16:02.024 "num_base_bdevs_operational": 3, 00:16:02.024 "base_bdevs_list": [ 00:16:02.024 { 00:16:02.024 "name": "BaseBdev1", 00:16:02.024 "uuid": "847ec198-b0cb-4460-831a-33cca77c7e04", 00:16:02.024 "is_configured": true, 00:16:02.024 "data_offset": 0, 00:16:02.024 "data_size": 65536 00:16:02.024 }, 00:16:02.024 { 00:16:02.024 "name": "BaseBdev2", 00:16:02.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.024 "is_configured": false, 00:16:02.024 "data_offset": 0, 00:16:02.024 "data_size": 0 00:16:02.024 }, 00:16:02.024 { 00:16:02.024 "name": "BaseBdev3", 00:16:02.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.024 "is_configured": false, 00:16:02.024 "data_offset": 0, 00:16:02.024 "data_size": 0 00:16:02.024 } 00:16:02.024 ] 00:16:02.024 }' 00:16:02.024 04:56:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.024 04:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:02.283 04:56:25 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.542 [2024-11-18 04:56:25.954791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.542 BaseBdev2 00:16:02.542 04:56:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:02.542 04:56:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:02.542 04:56:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:02.542 04:56:25 -- common/autotest_common.sh@899 -- # local i 00:16:02.542 04:56:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:02.542 04:56:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:02.542 04:56:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.801 04:56:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.060 [ 00:16:03.060 { 00:16:03.060 "name": "BaseBdev2", 00:16:03.060 "aliases": [ 00:16:03.060 "f277234b-63e2-4e26-944b-36d82a110fcb" 00:16:03.060 ], 00:16:03.060 "product_name": "Malloc disk", 00:16:03.060 "block_size": 512, 00:16:03.060 "num_blocks": 65536, 00:16:03.060 "uuid": "f277234b-63e2-4e26-944b-36d82a110fcb", 00:16:03.060 "assigned_rate_limits": { 00:16:03.060 "rw_ios_per_sec": 0, 00:16:03.060 "rw_mbytes_per_sec": 0, 00:16:03.060 "r_mbytes_per_sec": 0, 00:16:03.060 "w_mbytes_per_sec": 0 00:16:03.060 }, 00:16:03.060 "claimed": true, 00:16:03.060 "claim_type": "exclusive_write", 00:16:03.060 "zoned": false, 00:16:03.060 "supported_io_types": { 00:16:03.060 "read": true, 00:16:03.060 "write": true, 00:16:03.060 "unmap": true, 00:16:03.060 "write_zeroes": true, 00:16:03.060 "flush": true, 00:16:03.060 "reset": true, 00:16:03.060 "compare": false, 00:16:03.060 "compare_and_write": false, 00:16:03.060 "abort": true, 00:16:03.060 "nvme_admin": false, 00:16:03.060 "nvme_io": false 00:16:03.060 }, 00:16:03.060 "memory_domains": [ 00:16:03.060 { 00:16:03.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.060 "dma_device_type": 2 00:16:03.060 } 00:16:03.060 ], 00:16:03.060 "driver_specific": {} 00:16:03.060 } 00:16:03.060 ] 00:16:03.060 04:56:26 -- common/autotest_common.sh@905 -- # return 0 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.060 04:56:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
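The @127 pair above (bdev_raid_get_bdevs all piped through jq) is the workhorse of verify_raid_bdev_state: dump every raid bdev, pick out the one under test, and compare its fields against the expected values passed in. A minimal sketch of the check happening at this point in the trace, reusing the $rpc_py shorthand from the earlier sketch; with BaseBdev1 and BaseBdev2 claimed but BaseBdev3 still missing, the expected answers are exactly those in the JSON that follows:

    info=$($rpc_py bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    # array is still assembling: 2 of 3 base bdevs discovered, not yet online
    [ "$(jq -r '.state'                      <<< "$info")" = configuring ]
    [ "$(jq -r '.num_base_bdevs_discovered'  <<< "$info")" = 2 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" = 3 ]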
00:16:03.319 04:56:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.319 "name": "Existed_Raid", 00:16:03.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.319 "strip_size_kb": 64, 00:16:03.319 "state": "configuring", 00:16:03.319 "raid_level": "concat", 00:16:03.319 "superblock": false, 00:16:03.319 "num_base_bdevs": 3, 00:16:03.319 "num_base_bdevs_discovered": 2, 00:16:03.319 "num_base_bdevs_operational": 3, 00:16:03.319 "base_bdevs_list": [ 00:16:03.319 { 00:16:03.319 "name": "BaseBdev1", 00:16:03.319 "uuid": "847ec198-b0cb-4460-831a-33cca77c7e04", 00:16:03.319 "is_configured": true, 00:16:03.319 "data_offset": 0, 00:16:03.319 "data_size": 65536 00:16:03.319 }, 00:16:03.319 { 00:16:03.319 "name": "BaseBdev2", 00:16:03.319 "uuid": "f277234b-63e2-4e26-944b-36d82a110fcb", 00:16:03.319 "is_configured": true, 00:16:03.319 "data_offset": 0, 00:16:03.319 "data_size": 65536 00:16:03.319 }, 00:16:03.319 { 00:16:03.319 "name": "BaseBdev3", 00:16:03.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.319 "is_configured": false, 00:16:03.319 "data_offset": 0, 00:16:03.319 "data_size": 0 00:16:03.319 } 00:16:03.319 ] 00:16:03.319 }' 00:16:03.319 04:56:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.319 04:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.578 04:56:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:03.837 [2024-11-18 04:56:27.223204] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.837 [2024-11-18 04:56:27.223318] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:03.837 [2024-11-18 04:56:27.223335] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:03.837 [2024-11-18 04:56:27.223460] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:03.837 [2024-11-18 04:56:27.223936] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:03.837 [2024-11-18 04:56:27.223963] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:03.837 [2024-11-18 04:56:27.224229] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.837 BaseBdev3 00:16:03.837 04:56:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:03.837 04:56:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:03.837 04:56:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.837 04:56:27 -- common/autotest_common.sh@899 -- # local i 00:16:03.837 04:56:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.837 04:56:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.837 04:56:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.096 04:56:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:04.355 [ 00:16:04.355 { 00:16:04.355 "name": "BaseBdev3", 00:16:04.355 "aliases": [ 00:16:04.355 "0d4f6056-d398-4787-9118-a14b9897d0c2" 00:16:04.355 ], 00:16:04.355 "product_name": "Malloc disk", 00:16:04.355 "block_size": 512, 00:16:04.355 "num_blocks": 65536, 00:16:04.355 "uuid": "0d4f6056-d398-4787-9118-a14b9897d0c2", 00:16:04.355 "assigned_rate_limits": { 00:16:04.355 
"rw_ios_per_sec": 0, 00:16:04.355 "rw_mbytes_per_sec": 0, 00:16:04.355 "r_mbytes_per_sec": 0, 00:16:04.355 "w_mbytes_per_sec": 0 00:16:04.355 }, 00:16:04.355 "claimed": true, 00:16:04.355 "claim_type": "exclusive_write", 00:16:04.355 "zoned": false, 00:16:04.355 "supported_io_types": { 00:16:04.355 "read": true, 00:16:04.355 "write": true, 00:16:04.355 "unmap": true, 00:16:04.355 "write_zeroes": true, 00:16:04.355 "flush": true, 00:16:04.355 "reset": true, 00:16:04.355 "compare": false, 00:16:04.355 "compare_and_write": false, 00:16:04.355 "abort": true, 00:16:04.355 "nvme_admin": false, 00:16:04.355 "nvme_io": false 00:16:04.355 }, 00:16:04.355 "memory_domains": [ 00:16:04.355 { 00:16:04.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.355 "dma_device_type": 2 00:16:04.355 } 00:16:04.355 ], 00:16:04.355 "driver_specific": {} 00:16:04.355 } 00:16:04.355 ] 00:16:04.355 04:56:27 -- common/autotest_common.sh@905 -- # return 0 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.355 04:56:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.614 04:56:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.614 "name": "Existed_Raid", 00:16:04.614 "uuid": "70d4ddd4-8fc4-4889-a8c1-96fd7f27e543", 00:16:04.614 "strip_size_kb": 64, 00:16:04.614 "state": "online", 00:16:04.614 "raid_level": "concat", 00:16:04.614 "superblock": false, 00:16:04.614 "num_base_bdevs": 3, 00:16:04.614 "num_base_bdevs_discovered": 3, 00:16:04.614 "num_base_bdevs_operational": 3, 00:16:04.614 "base_bdevs_list": [ 00:16:04.614 { 00:16:04.614 "name": "BaseBdev1", 00:16:04.614 "uuid": "847ec198-b0cb-4460-831a-33cca77c7e04", 00:16:04.614 "is_configured": true, 00:16:04.614 "data_offset": 0, 00:16:04.614 "data_size": 65536 00:16:04.614 }, 00:16:04.614 { 00:16:04.614 "name": "BaseBdev2", 00:16:04.614 "uuid": "f277234b-63e2-4e26-944b-36d82a110fcb", 00:16:04.614 "is_configured": true, 00:16:04.614 "data_offset": 0, 00:16:04.614 "data_size": 65536 00:16:04.614 }, 00:16:04.614 { 00:16:04.614 "name": "BaseBdev3", 00:16:04.614 "uuid": "0d4f6056-d398-4787-9118-a14b9897d0c2", 00:16:04.614 "is_configured": true, 00:16:04.614 "data_offset": 0, 00:16:04.614 "data_size": 65536 00:16:04.614 } 00:16:04.614 ] 00:16:04.614 }' 00:16:04.614 04:56:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.614 04:56:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.874 04:56:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:05.133 [2024-11-18 04:56:28.459704] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.133 [2024-11-18 04:56:28.459960] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.133 [2024-11-18 04:56:28.460149] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.133 04:56:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.392 04:56:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.392 "name": "Existed_Raid", 00:16:05.392 "uuid": "70d4ddd4-8fc4-4889-a8c1-96fd7f27e543", 00:16:05.392 "strip_size_kb": 64, 00:16:05.392 "state": "offline", 00:16:05.392 "raid_level": "concat", 00:16:05.392 "superblock": false, 00:16:05.392 "num_base_bdevs": 3, 00:16:05.392 "num_base_bdevs_discovered": 2, 00:16:05.392 "num_base_bdevs_operational": 2, 00:16:05.392 "base_bdevs_list": [ 00:16:05.392 { 00:16:05.392 "name": null, 00:16:05.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.392 "is_configured": false, 00:16:05.392 "data_offset": 0, 00:16:05.392 "data_size": 65536 00:16:05.392 }, 00:16:05.392 { 00:16:05.392 "name": "BaseBdev2", 00:16:05.392 "uuid": "f277234b-63e2-4e26-944b-36d82a110fcb", 00:16:05.392 "is_configured": true, 00:16:05.392 "data_offset": 0, 00:16:05.392 "data_size": 65536 00:16:05.392 }, 00:16:05.392 { 00:16:05.392 "name": "BaseBdev3", 00:16:05.392 "uuid": "0d4f6056-d398-4787-9118-a14b9897d0c2", 00:16:05.392 "is_configured": true, 00:16:05.392 "data_offset": 0, 00:16:05.393 "data_size": 65536 00:16:05.393 } 00:16:05.393 ] 00:16:05.393 }' 00:16:05.393 04:56:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.393 04:56:28 -- common/autotest_common.sh@10 -- # set +x 00:16:05.652 04:56:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:05.652 04:56:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:05.652 04:56:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.652 04:56:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:05.910 04:56:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:05.910 04:56:29 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.910 04:56:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:06.169 [2024-11-18 04:56:29.519312] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:06.169 04:56:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:06.169 04:56:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:06.169 04:56:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.169 04:56:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:06.428 04:56:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:06.428 04:56:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:06.428 04:56:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:06.687 [2024-11-18 04:56:30.056878] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.687 [2024-11-18 04:56:30.057111] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:06.687 04:56:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:06.687 04:56:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:06.687 04:56:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:06.687 04:56:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.950 04:56:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:06.950 04:56:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:06.950 04:56:30 -- bdev/bdev_raid.sh@287 -- # killprocess 72389 00:16:06.950 04:56:30 -- common/autotest_common.sh@936 -- # '[' -z 72389 ']' 00:16:06.950 04:56:30 -- common/autotest_common.sh@940 -- # kill -0 72389 00:16:06.950 04:56:30 -- common/autotest_common.sh@941 -- # uname 00:16:06.950 04:56:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.950 04:56:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72389 00:16:06.950 killing process with pid 72389 00:16:06.950 04:56:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:06.950 04:56:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:06.950 04:56:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72389' 00:16:06.950 04:56:30 -- common/autotest_common.sh@955 -- # kill 72389 00:16:06.950 [2024-11-18 04:56:30.399021] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.950 04:56:30 -- common/autotest_common.sh@960 -- # wait 72389 00:16:06.950 [2024-11-18 04:56:30.399140] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:08.334 00:16:08.334 real 0m10.119s 00:16:08.334 user 0m16.696s 00:16:08.334 sys 0m1.545s 00:16:08.334 ************************************ 00:16:08.334 END TEST raid_state_function_test 00:16:08.334 ************************************ 00:16:08.334 04:56:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:08.334 04:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:08.334 04:56:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:08.334 
04:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.334 04:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:08.334 ************************************ 00:16:08.334 START TEST raid_state_function_test_sb 00:16:08.334 ************************************ 00:16:08.334 04:56:31 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=72729 00:16:08.334 Process raid pid: 72729 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72729' 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72729 /var/tmp/spdk-raid.sock 00:16:08.334 04:56:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:08.334 04:56:31 -- common/autotest_common.sh@829 -- # '[' -z 72729 ']' 00:16:08.334 04:56:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.334 04:56:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.334 04:56:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.334 04:56:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.334 04:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:08.334 [2024-11-18 04:56:31.611251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
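The only functional difference from the run above is superblock_create_arg: with superblock=true it expands to -s on the bdev_raid_create call. Reserving an on-disk superblock costs 2048 blocks (1 MiB at the 512-byte block size) at the head of every base bdev, which is why base bdevs in the earlier pass report data_offset 0 / data_size 65536 while this pass reports data_offset 2048 / data_size 63488. The resulting concat capacities, with every block count taken from the trace (the two creates are shown side by side for contrast, not as a literal back-to-back run):

    # non-superblock pass: the whole base bdev is data
    $rpc_py bdev_raid_create -z 64    -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    echo $(( 3 * 65536 ))       # 196608 blocks, the blockcnt logged earlier

    # superblock pass: 2048 blocks per base bdev go to metadata
    $rpc_py bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    echo $(( 65536 - 2048 ))    # 63488 usable blocks per base bdev
    echo $(( 3 * 63488 ))       # 190464 blocks, the blockcnt logged below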
00:16:08.334 [2024-11-18 04:56:31.611427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.334 [2024-11-18 04:56:31.779828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.594 [2024-11-18 04:56:31.948581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.594 [2024-11-18 04:56:32.108601] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.162 04:56:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.162 04:56:32 -- common/autotest_common.sh@862 -- # return 0 00:16:09.162 04:56:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:09.421 [2024-11-18 04:56:32.721935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.421 [2024-11-18 04:56:32.722029] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.421 [2024-11-18 04:56:32.722044] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.421 [2024-11-18 04:56:32.722060] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.421 [2024-11-18 04:56:32.722068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.421 [2024-11-18 04:56:32.722081] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.421 04:56:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.680 04:56:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.680 "name": "Existed_Raid", 00:16:09.680 "uuid": "4e588647-ccac-41b9-a15a-d9006a706031", 00:16:09.680 "strip_size_kb": 64, 00:16:09.680 "state": "configuring", 00:16:09.680 "raid_level": "concat", 00:16:09.680 "superblock": true, 00:16:09.680 "num_base_bdevs": 3, 00:16:09.680 "num_base_bdevs_discovered": 0, 00:16:09.680 "num_base_bdevs_operational": 3, 00:16:09.680 "base_bdevs_list": [ 00:16:09.680 { 00:16:09.680 "name": "BaseBdev1", 00:16:09.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.680 "is_configured": false, 00:16:09.680 "data_offset": 0, 00:16:09.680 "data_size": 0 00:16:09.680 }, 00:16:09.680 { 00:16:09.680 "name": "BaseBdev2", 00:16:09.680 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:09.680 "is_configured": false, 00:16:09.680 "data_offset": 0, 00:16:09.680 "data_size": 0 00:16:09.680 }, 00:16:09.680 { 00:16:09.680 "name": "BaseBdev3", 00:16:09.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.680 "is_configured": false, 00:16:09.680 "data_offset": 0, 00:16:09.680 "data_size": 0 00:16:09.680 } 00:16:09.680 ] 00:16:09.680 }' 00:16:09.680 04:56:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.680 04:56:32 -- common/autotest_common.sh@10 -- # set +x 00:16:09.939 04:56:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.198 [2024-11-18 04:56:33.489942] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.198 [2024-11-18 04:56:33.490008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:10.198 04:56:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:10.458 [2024-11-18 04:56:33.746105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:10.458 [2024-11-18 04:56:33.746176] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:10.458 [2024-11-18 04:56:33.746217] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.458 [2024-11-18 04:56:33.746236] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.458 [2024-11-18 04:56:33.746245] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.458 [2024-11-18 04:56:33.746258] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.458 04:56:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:10.458 [2024-11-18 04:56:33.978632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.458 BaseBdev1 00:16:10.717 04:56:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:10.717 04:56:33 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:10.717 04:56:33 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.717 04:56:33 -- common/autotest_common.sh@899 -- # local i 00:16:10.717 04:56:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.717 04:56:33 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.717 04:56:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.976 04:56:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:10.976 [ 00:16:10.976 { 00:16:10.976 "name": "BaseBdev1", 00:16:10.976 "aliases": [ 00:16:10.976 "fd4a6848-c6c6-4a14-a5a2-f13c785a9611" 00:16:10.976 ], 00:16:10.976 "product_name": "Malloc disk", 00:16:10.976 "block_size": 512, 00:16:10.976 "num_blocks": 65536, 00:16:10.976 "uuid": "fd4a6848-c6c6-4a14-a5a2-f13c785a9611", 00:16:10.976 "assigned_rate_limits": { 00:16:10.976 "rw_ios_per_sec": 0, 00:16:10.976 "rw_mbytes_per_sec": 0, 00:16:10.976 "r_mbytes_per_sec": 0, 00:16:10.976 
"w_mbytes_per_sec": 0 00:16:10.976 }, 00:16:10.976 "claimed": true, 00:16:10.976 "claim_type": "exclusive_write", 00:16:10.976 "zoned": false, 00:16:10.976 "supported_io_types": { 00:16:10.976 "read": true, 00:16:10.976 "write": true, 00:16:10.976 "unmap": true, 00:16:10.976 "write_zeroes": true, 00:16:10.976 "flush": true, 00:16:10.976 "reset": true, 00:16:10.976 "compare": false, 00:16:10.976 "compare_and_write": false, 00:16:10.976 "abort": true, 00:16:10.976 "nvme_admin": false, 00:16:10.976 "nvme_io": false 00:16:10.976 }, 00:16:10.976 "memory_domains": [ 00:16:10.976 { 00:16:10.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.976 "dma_device_type": 2 00:16:10.976 } 00:16:10.976 ], 00:16:10.976 "driver_specific": {} 00:16:10.976 } 00:16:10.976 ] 00:16:10.976 04:56:34 -- common/autotest_common.sh@905 -- # return 0 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.976 04:56:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.235 04:56:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.235 "name": "Existed_Raid", 00:16:11.235 "uuid": "d1d11ce9-4bfd-4fa7-bdf0-b1b176248de5", 00:16:11.235 "strip_size_kb": 64, 00:16:11.235 "state": "configuring", 00:16:11.235 "raid_level": "concat", 00:16:11.235 "superblock": true, 00:16:11.235 "num_base_bdevs": 3, 00:16:11.235 "num_base_bdevs_discovered": 1, 00:16:11.235 "num_base_bdevs_operational": 3, 00:16:11.235 "base_bdevs_list": [ 00:16:11.235 { 00:16:11.235 "name": "BaseBdev1", 00:16:11.235 "uuid": "fd4a6848-c6c6-4a14-a5a2-f13c785a9611", 00:16:11.235 "is_configured": true, 00:16:11.235 "data_offset": 2048, 00:16:11.235 "data_size": 63488 00:16:11.235 }, 00:16:11.235 { 00:16:11.235 "name": "BaseBdev2", 00:16:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.235 "is_configured": false, 00:16:11.235 "data_offset": 0, 00:16:11.235 "data_size": 0 00:16:11.235 }, 00:16:11.235 { 00:16:11.235 "name": "BaseBdev3", 00:16:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.235 "is_configured": false, 00:16:11.235 "data_offset": 0, 00:16:11.235 "data_size": 0 00:16:11.235 } 00:16:11.235 ] 00:16:11.235 }' 00:16:11.235 04:56:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.235 04:56:34 -- common/autotest_common.sh@10 -- # set +x 00:16:11.494 04:56:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:11.752 [2024-11-18 04:56:35.159050] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.752 [2024-11-18 04:56:35.159125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:11.752 04:56:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:11.752 04:56:35 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:12.011 04:56:35 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:12.269 BaseBdev1 00:16:12.269 04:56:35 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:12.269 04:56:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:12.269 04:56:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.269 04:56:35 -- common/autotest_common.sh@899 -- # local i 00:16:12.269 04:56:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.270 04:56:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.270 04:56:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.532 04:56:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:12.793 [ 00:16:12.793 { 00:16:12.793 "name": "BaseBdev1", 00:16:12.793 "aliases": [ 00:16:12.793 "0334bb08-ada7-4ec6-89f6-ab2d71736edd" 00:16:12.793 ], 00:16:12.793 "product_name": "Malloc disk", 00:16:12.793 "block_size": 512, 00:16:12.793 "num_blocks": 65536, 00:16:12.793 "uuid": "0334bb08-ada7-4ec6-89f6-ab2d71736edd", 00:16:12.793 "assigned_rate_limits": { 00:16:12.793 "rw_ios_per_sec": 0, 00:16:12.793 "rw_mbytes_per_sec": 0, 00:16:12.793 "r_mbytes_per_sec": 0, 00:16:12.793 "w_mbytes_per_sec": 0 00:16:12.793 }, 00:16:12.793 "claimed": false, 00:16:12.793 "zoned": false, 00:16:12.793 "supported_io_types": { 00:16:12.793 "read": true, 00:16:12.793 "write": true, 00:16:12.793 "unmap": true, 00:16:12.793 "write_zeroes": true, 00:16:12.793 "flush": true, 00:16:12.793 "reset": true, 00:16:12.793 "compare": false, 00:16:12.793 "compare_and_write": false, 00:16:12.793 "abort": true, 00:16:12.793 "nvme_admin": false, 00:16:12.793 "nvme_io": false 00:16:12.793 }, 00:16:12.793 "memory_domains": [ 00:16:12.793 { 00:16:12.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.793 "dma_device_type": 2 00:16:12.793 } 00:16:12.793 ], 00:16:12.793 "driver_specific": {} 00:16:12.793 } 00:16:12.793 ] 00:16:12.793 04:56:36 -- common/autotest_common.sh@905 -- # return 0 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:12.793 [2024-11-18 04:56:36.265247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.793 [2024-11-18 04:56:36.267277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.793 [2024-11-18 04:56:36.267358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.793 [2024-11-18 04:56:36.267372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.793 [2024-11-18 04:56:36.267388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:12.793 
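Every BaseBdevN creation in this suite is followed by the same waitforbdev sequence visible in the @897-@905 trace lines: default the timeout to 2000 ms, flush outstanding examine callbacks, then do a bounded lookup. Condensed into a sketch (the real helper in common/autotest_common.sh carries a few more branches than shown here):

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}     # trace defaults to 2000 ms
        $rpc_py bdev_wait_for_examine     # let claiming modules finish first
        # -t makes the lookup block until the bdev appears or the timer expires
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
    }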
04:56:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.793 04:56:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.051 04:56:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.051 "name": "Existed_Raid", 00:16:13.051 "uuid": "df0c6ffe-76f6-45e7-b1e0-46b8f06abf36", 00:16:13.051 "strip_size_kb": 64, 00:16:13.051 "state": "configuring", 00:16:13.051 "raid_level": "concat", 00:16:13.051 "superblock": true, 00:16:13.051 "num_base_bdevs": 3, 00:16:13.051 "num_base_bdevs_discovered": 1, 00:16:13.051 "num_base_bdevs_operational": 3, 00:16:13.051 "base_bdevs_list": [ 00:16:13.051 { 00:16:13.051 "name": "BaseBdev1", 00:16:13.051 "uuid": "0334bb08-ada7-4ec6-89f6-ab2d71736edd", 00:16:13.051 "is_configured": true, 00:16:13.051 "data_offset": 2048, 00:16:13.051 "data_size": 63488 00:16:13.051 }, 00:16:13.051 { 00:16:13.051 "name": "BaseBdev2", 00:16:13.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.051 "is_configured": false, 00:16:13.051 "data_offset": 0, 00:16:13.051 "data_size": 0 00:16:13.051 }, 00:16:13.051 { 00:16:13.051 "name": "BaseBdev3", 00:16:13.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.051 "is_configured": false, 00:16:13.051 "data_offset": 0, 00:16:13.051 "data_size": 0 00:16:13.051 } 00:16:13.051 ] 00:16:13.051 }' 00:16:13.051 04:56:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.051 04:56:36 -- common/autotest_common.sh@10 -- # set +x 00:16:13.620 04:56:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.620 [2024-11-18 04:56:37.123311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.620 BaseBdev2 00:16:13.879 04:56:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:13.879 04:56:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:13.879 04:56:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.879 04:56:37 -- common/autotest_common.sh@899 -- # local i 00:16:13.879 04:56:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.879 04:56:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.879 04:56:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.879 04:56:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:14.138 [ 00:16:14.138 { 00:16:14.138 "name": "BaseBdev2", 00:16:14.138 "aliases": [ 00:16:14.138 
"0588f258-e42b-451a-a884-8cda6b44eac5" 00:16:14.138 ], 00:16:14.138 "product_name": "Malloc disk", 00:16:14.138 "block_size": 512, 00:16:14.138 "num_blocks": 65536, 00:16:14.138 "uuid": "0588f258-e42b-451a-a884-8cda6b44eac5", 00:16:14.138 "assigned_rate_limits": { 00:16:14.138 "rw_ios_per_sec": 0, 00:16:14.138 "rw_mbytes_per_sec": 0, 00:16:14.138 "r_mbytes_per_sec": 0, 00:16:14.138 "w_mbytes_per_sec": 0 00:16:14.138 }, 00:16:14.138 "claimed": true, 00:16:14.138 "claim_type": "exclusive_write", 00:16:14.138 "zoned": false, 00:16:14.138 "supported_io_types": { 00:16:14.138 "read": true, 00:16:14.138 "write": true, 00:16:14.138 "unmap": true, 00:16:14.138 "write_zeroes": true, 00:16:14.138 "flush": true, 00:16:14.138 "reset": true, 00:16:14.138 "compare": false, 00:16:14.138 "compare_and_write": false, 00:16:14.138 "abort": true, 00:16:14.138 "nvme_admin": false, 00:16:14.138 "nvme_io": false 00:16:14.138 }, 00:16:14.138 "memory_domains": [ 00:16:14.138 { 00:16:14.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.138 "dma_device_type": 2 00:16:14.138 } 00:16:14.138 ], 00:16:14.138 "driver_specific": {} 00:16:14.138 } 00:16:14.138 ] 00:16:14.138 04:56:37 -- common/autotest_common.sh@905 -- # return 0 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.138 04:56:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.397 04:56:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.397 "name": "Existed_Raid", 00:16:14.397 "uuid": "df0c6ffe-76f6-45e7-b1e0-46b8f06abf36", 00:16:14.397 "strip_size_kb": 64, 00:16:14.397 "state": "configuring", 00:16:14.397 "raid_level": "concat", 00:16:14.397 "superblock": true, 00:16:14.397 "num_base_bdevs": 3, 00:16:14.397 "num_base_bdevs_discovered": 2, 00:16:14.397 "num_base_bdevs_operational": 3, 00:16:14.397 "base_bdevs_list": [ 00:16:14.397 { 00:16:14.397 "name": "BaseBdev1", 00:16:14.397 "uuid": "0334bb08-ada7-4ec6-89f6-ab2d71736edd", 00:16:14.397 "is_configured": true, 00:16:14.397 "data_offset": 2048, 00:16:14.397 "data_size": 63488 00:16:14.397 }, 00:16:14.397 { 00:16:14.397 "name": "BaseBdev2", 00:16:14.397 "uuid": "0588f258-e42b-451a-a884-8cda6b44eac5", 00:16:14.397 "is_configured": true, 00:16:14.397 "data_offset": 2048, 00:16:14.397 "data_size": 63488 00:16:14.397 }, 00:16:14.397 { 00:16:14.397 "name": "BaseBdev3", 00:16:14.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.397 "is_configured": false, 00:16:14.397 "data_offset": 0, 00:16:14.397 "data_size": 0 
00:16:14.397 } 00:16:14.397 ] 00:16:14.397 }' 00:16:14.397 04:56:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.397 04:56:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.656 04:56:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:14.914 [2024-11-18 04:56:38.342380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.914 [2024-11-18 04:56:38.342655] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:14.914 [2024-11-18 04:56:38.342717] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:14.914 [2024-11-18 04:56:38.342851] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:14.914 [2024-11-18 04:56:38.343309] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:14.914 [2024-11-18 04:56:38.343335] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:14.914 [2024-11-18 04:56:38.343530] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.914 BaseBdev3 00:16:14.914 04:56:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:14.914 04:56:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:14.914 04:56:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:14.914 04:56:38 -- common/autotest_common.sh@899 -- # local i 00:16:14.914 04:56:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:14.914 04:56:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:14.914 04:56:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.173 04:56:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:15.433 [ 00:16:15.433 { 00:16:15.433 "name": "BaseBdev3", 00:16:15.433 "aliases": [ 00:16:15.433 "385091ce-2da7-4e2c-a548-e2eaae25cdf9" 00:16:15.433 ], 00:16:15.433 "product_name": "Malloc disk", 00:16:15.433 "block_size": 512, 00:16:15.433 "num_blocks": 65536, 00:16:15.433 "uuid": "385091ce-2da7-4e2c-a548-e2eaae25cdf9", 00:16:15.433 "assigned_rate_limits": { 00:16:15.433 "rw_ios_per_sec": 0, 00:16:15.433 "rw_mbytes_per_sec": 0, 00:16:15.433 "r_mbytes_per_sec": 0, 00:16:15.433 "w_mbytes_per_sec": 0 00:16:15.433 }, 00:16:15.433 "claimed": true, 00:16:15.433 "claim_type": "exclusive_write", 00:16:15.433 "zoned": false, 00:16:15.433 "supported_io_types": { 00:16:15.433 "read": true, 00:16:15.433 "write": true, 00:16:15.433 "unmap": true, 00:16:15.433 "write_zeroes": true, 00:16:15.433 "flush": true, 00:16:15.433 "reset": true, 00:16:15.433 "compare": false, 00:16:15.433 "compare_and_write": false, 00:16:15.433 "abort": true, 00:16:15.433 "nvme_admin": false, 00:16:15.433 "nvme_io": false 00:16:15.433 }, 00:16:15.433 "memory_domains": [ 00:16:15.433 { 00:16:15.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.433 "dma_device_type": 2 00:16:15.433 } 00:16:15.433 ], 00:16:15.433 "driver_specific": {} 00:16:15.433 } 00:16:15.433 ] 00:16:15.433 04:56:38 -- common/autotest_common.sh@905 -- # return 0 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:15.433 04:56:38 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.433 04:56:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.692 04:56:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.692 "name": "Existed_Raid", 00:16:15.692 "uuid": "df0c6ffe-76f6-45e7-b1e0-46b8f06abf36", 00:16:15.692 "strip_size_kb": 64, 00:16:15.692 "state": "online", 00:16:15.692 "raid_level": "concat", 00:16:15.692 "superblock": true, 00:16:15.692 "num_base_bdevs": 3, 00:16:15.692 "num_base_bdevs_discovered": 3, 00:16:15.692 "num_base_bdevs_operational": 3, 00:16:15.692 "base_bdevs_list": [ 00:16:15.692 { 00:16:15.692 "name": "BaseBdev1", 00:16:15.692 "uuid": "0334bb08-ada7-4ec6-89f6-ab2d71736edd", 00:16:15.692 "is_configured": true, 00:16:15.692 "data_offset": 2048, 00:16:15.692 "data_size": 63488 00:16:15.692 }, 00:16:15.692 { 00:16:15.692 "name": "BaseBdev2", 00:16:15.692 "uuid": "0588f258-e42b-451a-a884-8cda6b44eac5", 00:16:15.692 "is_configured": true, 00:16:15.692 "data_offset": 2048, 00:16:15.692 "data_size": 63488 00:16:15.692 }, 00:16:15.692 { 00:16:15.692 "name": "BaseBdev3", 00:16:15.692 "uuid": "385091ce-2da7-4e2c-a548-e2eaae25cdf9", 00:16:15.692 "is_configured": true, 00:16:15.692 "data_offset": 2048, 00:16:15.692 "data_size": 63488 00:16:15.692 } 00:16:15.692 ] 00:16:15.692 }' 00:16:15.692 04:56:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.693 04:56:38 -- common/autotest_common.sh@10 -- # set +x 00:16:15.950 04:56:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:16.209 [2024-11-18 04:56:39.510982] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.209 [2024-11-18 04:56:39.511027] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.209 [2024-11-18 04:56:39.511090] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:16.209 04:56:39 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.209 04:56:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.467 04:56:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.467 "name": "Existed_Raid", 00:16:16.467 "uuid": "df0c6ffe-76f6-45e7-b1e0-46b8f06abf36", 00:16:16.467 "strip_size_kb": 64, 00:16:16.467 "state": "offline", 00:16:16.467 "raid_level": "concat", 00:16:16.467 "superblock": true, 00:16:16.467 "num_base_bdevs": 3, 00:16:16.467 "num_base_bdevs_discovered": 2, 00:16:16.467 "num_base_bdevs_operational": 2, 00:16:16.467 "base_bdevs_list": [ 00:16:16.467 { 00:16:16.467 "name": null, 00:16:16.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.467 "is_configured": false, 00:16:16.467 "data_offset": 2048, 00:16:16.467 "data_size": 63488 00:16:16.467 }, 00:16:16.467 { 00:16:16.467 "name": "BaseBdev2", 00:16:16.467 "uuid": "0588f258-e42b-451a-a884-8cda6b44eac5", 00:16:16.467 "is_configured": true, 00:16:16.467 "data_offset": 2048, 00:16:16.467 "data_size": 63488 00:16:16.467 }, 00:16:16.467 { 00:16:16.467 "name": "BaseBdev3", 00:16:16.467 "uuid": "385091ce-2da7-4e2c-a548-e2eaae25cdf9", 00:16:16.467 "is_configured": true, 00:16:16.467 "data_offset": 2048, 00:16:16.467 "data_size": 63488 00:16:16.468 } 00:16:16.468 ] 00:16:16.468 }' 00:16:16.468 04:56:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.468 04:56:39 -- common/autotest_common.sh@10 -- # set +x 00:16:16.726 04:56:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:16.726 04:56:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:16.726 04:56:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.726 04:56:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:16.986 04:56:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:16.986 04:56:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.986 04:56:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:17.245 [2024-11-18 04:56:40.708368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.504 04:56:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:17.504 04:56:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:17.504 04:56:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.504 04:56:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:17.763 04:56:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:17.763 04:56:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.763 04:56:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:18.023 [2024-11-18 04:56:41.306401] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
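Distilled from the removal loop traced above: deleting any base bdev out from under a concat raid drops the array to offline, because has_redundancy returns nonzero for concat. A minimal sketch of the same check in plain RPC calls (paths and names copied from the trace; the trailing jq projection to .state is an assumption, the harness captures the whole object and compares fields one by one):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Pull one leg out from under the concat array...
    $rpc -s $sock bdev_malloc_delete BaseBdev2
    # ...and confirm the raid bdev now reports offline with one fewer member.
    state=$($rpc -s $sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = offline ] || echo "unexpected state: $state" >&2
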
00:16:18.023 [2024-11-18 04:56:41.306488] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:18.023 04:56:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:18.023 04:56:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:18.023 04:56:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.023 04:56:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.281 04:56:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:18.281 04:56:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:18.281 04:56:41 -- bdev/bdev_raid.sh@287 -- # killprocess 72729 00:16:18.281 04:56:41 -- common/autotest_common.sh@936 -- # '[' -z 72729 ']' 00:16:18.281 04:56:41 -- common/autotest_common.sh@940 -- # kill -0 72729 00:16:18.281 04:56:41 -- common/autotest_common.sh@941 -- # uname 00:16:18.281 04:56:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:18.281 04:56:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72729 00:16:18.281 killing process with pid 72729 00:16:18.281 04:56:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:18.281 04:56:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:18.281 04:56:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72729' 00:16:18.281 04:56:41 -- common/autotest_common.sh@955 -- # kill 72729 00:16:18.281 04:56:41 -- common/autotest_common.sh@960 -- # wait 72729 00:16:18.281 [2024-11-18 04:56:41.653818] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.281 [2024-11-18 04:56:41.653937] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:19.216 00:16:19.216 real 0m11.136s 00:16:19.216 user 0m18.599s 00:16:19.216 sys 0m1.601s 00:16:19.216 04:56:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:19.216 04:56:42 -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 ************************************ 00:16:19.216 END TEST raid_state_function_test_sb 00:16:19.216 ************************************ 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:19.216 04:56:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:19.216 04:56:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.216 04:56:42 -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 ************************************ 00:16:19.216 START TEST raid_superblock_test 00:16:19.216 ************************************ 00:16:19.216 04:56:42 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:19.216 04:56:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:19.474 
04:56:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:19.474 04:56:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:19.474 04:56:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:19.474 04:56:42 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:19.474 04:56:42 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:19.475 04:56:42 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:19.475 04:56:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=73083 00:16:19.475 04:56:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 73083 /var/tmp/spdk-raid.sock 00:16:19.475 04:56:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:19.475 04:56:42 -- common/autotest_common.sh@829 -- # '[' -z 73083 ']' 00:16:19.475 04:56:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:19.475 04:56:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:19.475 04:56:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:19.475 04:56:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.475 04:56:42 -- common/autotest_common.sh@10 -- # set +x 00:16:19.475 [2024-11-18 04:56:42.804035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:19.475 [2024-11-18 04:56:42.804238] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73083 ] 00:16:19.475 [2024-11-18 04:56:42.976042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.733 [2024-11-18 04:56:43.144299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.992 [2024-11-18 04:56:43.302802] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.250 04:56:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.250 04:56:43 -- common/autotest_common.sh@862 -- # return 0 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.250 04:56:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:20.508 malloc1 00:16:20.508 04:56:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.767 [2024-11-18 04:56:44.174153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.767 [2024-11-18 04:56:44.174248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
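Each leg of the superblock test pairs a malloc backing bdev with a passthru bdev carrying a fixed UUID; that pairing is what the "Match on malloc1" / "pt_bdev registered" notices around here correspond to. The two RPCs per leg, with arguments copied from the trace (a sketch, assuming the bdev_svc app from this run is still listening on the socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MiB backing store with 512-byte blocks, wrapped in a passthru
    # bdev so the raid superblock lands on a bdev with a stable UUID.
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
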
00:16:20.767 [2024-11-18 04:56:44.174288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:20.767 [2024-11-18 04:56:44.174302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.767 [2024-11-18 04:56:44.177191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.767 [2024-11-18 04:56:44.177257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.767 pt1 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.767 04:56:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:21.026 malloc2 00:16:21.026 04:56:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.284 [2024-11-18 04:56:44.654593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.284 [2024-11-18 04:56:44.654693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.284 [2024-11-18 04:56:44.654726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:21.284 [2024-11-18 04:56:44.654741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.284 [2024-11-18 04:56:44.657053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.285 [2024-11-18 04:56:44.657105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.285 pt2 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.285 04:56:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:21.543 malloc3 00:16:21.544 04:56:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.802 [2024-11-18 04:56:45.117122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:21.802 [2024-11-18 04:56:45.117215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:16:21.802 [2024-11-18 04:56:45.117252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:21.802 [2024-11-18 04:56:45.117266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.802 [2024-11-18 04:56:45.119784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.802 [2024-11-18 04:56:45.119836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:21.802 pt3 00:16:21.803 04:56:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:21.803 04:56:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:21.803 04:56:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:21.803 [2024-11-18 04:56:45.325249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.061 [2024-11-18 04:56:45.327549] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.061 [2024-11-18 04:56:45.327680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:22.061 [2024-11-18 04:56:45.327953] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:16:22.061 [2024-11-18 04:56:45.327974] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:22.061 [2024-11-18 04:56:45.328119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:22.061 [2024-11-18 04:56:45.328596] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:16:22.061 [2024-11-18 04:56:45.328632] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:16:22.061 [2024-11-18 04:56:45.328800] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.061 04:56:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.061 "name": "raid_bdev1", 00:16:22.061 "uuid": "f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e", 00:16:22.061 "strip_size_kb": 64, 00:16:22.061 "state": "online", 00:16:22.061 "raid_level": "concat", 00:16:22.061 "superblock": true, 00:16:22.061 "num_base_bdevs": 3, 00:16:22.062 "num_base_bdevs_discovered": 3, 00:16:22.062 "num_base_bdevs_operational": 3, 00:16:22.062 "base_bdevs_list": [ 00:16:22.062 { 00:16:22.062 "name": "pt1", 00:16:22.062 "uuid": 
"5b634892-906f-54e0-b6af-45fc7cb7eadd", 00:16:22.062 "is_configured": true, 00:16:22.062 "data_offset": 2048, 00:16:22.062 "data_size": 63488 00:16:22.062 }, 00:16:22.062 { 00:16:22.062 "name": "pt2", 00:16:22.062 "uuid": "d6697ece-9486-5fd0-8a20-4dc3777cd0e2", 00:16:22.062 "is_configured": true, 00:16:22.062 "data_offset": 2048, 00:16:22.062 "data_size": 63488 00:16:22.062 }, 00:16:22.062 { 00:16:22.062 "name": "pt3", 00:16:22.062 "uuid": "9c5d8ee8-66b4-5f0e-9c3d-49eb6b49d036", 00:16:22.062 "is_configured": true, 00:16:22.062 "data_offset": 2048, 00:16:22.062 "data_size": 63488 00:16:22.062 } 00:16:22.062 ] 00:16:22.062 }' 00:16:22.062 04:56:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.062 04:56:45 -- common/autotest_common.sh@10 -- # set +x 00:16:22.630 04:56:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:22.630 04:56:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:22.630 [2024-11-18 04:56:46.133543] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.888 04:56:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e 00:16:22.888 04:56:46 -- bdev/bdev_raid.sh@380 -- # '[' -z f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e ']' 00:16:22.888 04:56:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:22.888 [2024-11-18 04:56:46.389381] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.888 [2024-11-18 04:56:46.389417] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.888 [2024-11-18 04:56:46.389536] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.888 [2024-11-18 04:56:46.389607] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.888 [2024-11-18 04:56:46.389625] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:16:22.888 04:56:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.888 04:56:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:23.147 04:56:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:23.147 04:56:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:23.147 04:56:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:23.147 04:56:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:23.406 04:56:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:23.406 04:56:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:23.666 04:56:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:23.666 04:56:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:23.925 04:56:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:23.925 04:56:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:24.184 04:56:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:24.184 04:56:47 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:24.184 04:56:47 -- common/autotest_common.sh@650 -- # local es=0 00:16:24.184 04:56:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:24.184 04:56:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.184 04:56:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.184 04:56:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.184 04:56:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.184 04:56:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.184 04:56:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.184 04:56:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.184 04:56:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:24.184 04:56:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:24.443 [2024-11-18 04:56:47.745706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:24.443 [2024-11-18 04:56:47.747837] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:24.443 [2024-11-18 04:56:47.747910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:24.443 [2024-11-18 04:56:47.747972] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:24.443 [2024-11-18 04:56:47.748031] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:24.443 [2024-11-18 04:56:47.748063] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:24.443 [2024-11-18 04:56:47.748084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.443 [2024-11-18 04:56:47.748097] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:16:24.443 request: 00:16:24.443 { 00:16:24.443 "name": "raid_bdev1", 00:16:24.443 "raid_level": "concat", 00:16:24.443 "base_bdevs": [ 00:16:24.443 "malloc1", 00:16:24.443 "malloc2", 00:16:24.443 "malloc3" 00:16:24.443 ], 00:16:24.443 "superblock": false, 00:16:24.443 "strip_size_kb": 64, 00:16:24.443 "method": "bdev_raid_create", 00:16:24.443 "req_id": 1 00:16:24.443 } 00:16:24.443 Got JSON-RPC error response 00:16:24.443 response: 00:16:24.443 { 00:16:24.443 "code": -17, 00:16:24.443 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:24.443 } 00:16:24.443 04:56:47 -- common/autotest_common.sh@653 -- # es=1 00:16:24.443 04:56:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.443 04:56:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.443 04:56:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.443 04:56:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:24.443 04:56:47 -- bdev/bdev_raid.sh@403 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.702 04:56:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:24.702 04:56:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:24.702 04:56:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.702 [2024-11-18 04:56:48.161801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:24.702 [2024-11-18 04:56:48.161885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.702 [2024-11-18 04:56:48.161912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:16:24.702 [2024-11-18 04:56:48.161927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.702 [2024-11-18 04:56:48.164490] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.702 [2024-11-18 04:56:48.164531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.702 [2024-11-18 04:56:48.164644] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:24.702 [2024-11-18 04:56:48.164740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.702 pt1 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.702 04:56:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.961 04:56:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.961 "name": "raid_bdev1", 00:16:24.961 "uuid": "f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e", 00:16:24.961 "strip_size_kb": 64, 00:16:24.961 "state": "configuring", 00:16:24.961 "raid_level": "concat", 00:16:24.961 "superblock": true, 00:16:24.961 "num_base_bdevs": 3, 00:16:24.961 "num_base_bdevs_discovered": 1, 00:16:24.961 "num_base_bdevs_operational": 3, 00:16:24.961 "base_bdevs_list": [ 00:16:24.961 { 00:16:24.961 "name": "pt1", 00:16:24.961 "uuid": "5b634892-906f-54e0-b6af-45fc7cb7eadd", 00:16:24.961 "is_configured": true, 00:16:24.961 "data_offset": 2048, 00:16:24.961 "data_size": 63488 00:16:24.961 }, 00:16:24.961 { 00:16:24.961 "name": null, 00:16:24.961 "uuid": "d6697ece-9486-5fd0-8a20-4dc3777cd0e2", 00:16:24.961 "is_configured": false, 00:16:24.961 "data_offset": 2048, 00:16:24.961 "data_size": 63488 00:16:24.961 }, 00:16:24.961 { 00:16:24.961 "name": null, 00:16:24.961 "uuid": "9c5d8ee8-66b4-5f0e-9c3d-49eb6b49d036", 00:16:24.961 "is_configured": false, 00:16:24.961 "data_offset": 
2048, 00:16:24.961 "data_size": 63488 00:16:24.961 } 00:16:24.961 ] 00:16:24.961 }' 00:16:24.961 04:56:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.961 04:56:48 -- common/autotest_common.sh@10 -- # set +x 00:16:25.219 04:56:48 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:25.219 04:56:48 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:25.479 [2024-11-18 04:56:48.857983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:25.479 [2024-11-18 04:56:48.858067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.479 [2024-11-18 04:56:48.858096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:16:25.479 [2024-11-18 04:56:48.858112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.479 [2024-11-18 04:56:48.858592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.479 [2024-11-18 04:56:48.858635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:25.479 [2024-11-18 04:56:48.858726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:25.479 [2024-11-18 04:56:48.858756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.479 pt2 00:16:25.479 04:56:48 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:25.738 [2024-11-18 04:56:49.058039] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.738 04:56:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.996 04:56:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.996 "name": "raid_bdev1", 00:16:25.996 "uuid": "f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e", 00:16:25.996 "strip_size_kb": 64, 00:16:25.996 "state": "configuring", 00:16:25.996 "raid_level": "concat", 00:16:25.996 "superblock": true, 00:16:25.996 "num_base_bdevs": 3, 00:16:25.996 "num_base_bdevs_discovered": 1, 00:16:25.996 "num_base_bdevs_operational": 3, 00:16:25.996 "base_bdevs_list": [ 00:16:25.996 { 00:16:25.996 "name": "pt1", 00:16:25.996 "uuid": "5b634892-906f-54e0-b6af-45fc7cb7eadd", 00:16:25.996 "is_configured": true, 00:16:25.996 "data_offset": 2048, 00:16:25.996 "data_size": 63488 00:16:25.996 }, 00:16:25.996 { 00:16:25.996 "name": null, 00:16:25.996 "uuid": 
"d6697ece-9486-5fd0-8a20-4dc3777cd0e2", 00:16:25.996 "is_configured": false, 00:16:25.996 "data_offset": 2048, 00:16:25.996 "data_size": 63488 00:16:25.996 }, 00:16:25.996 { 00:16:25.996 "name": null, 00:16:25.996 "uuid": "9c5d8ee8-66b4-5f0e-9c3d-49eb6b49d036", 00:16:25.996 "is_configured": false, 00:16:25.996 "data_offset": 2048, 00:16:25.996 "data_size": 63488 00:16:25.996 } 00:16:25.996 ] 00:16:25.996 }' 00:16:25.996 04:56:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.996 04:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:26.254 04:56:49 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:26.254 04:56:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:26.254 04:56:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.514 [2024-11-18 04:56:49.798213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.514 [2024-11-18 04:56:49.798305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.514 [2024-11-18 04:56:49.798336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:16:26.514 [2024-11-18 04:56:49.798350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.514 [2024-11-18 04:56:49.798824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.514 [2024-11-18 04:56:49.798847] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.514 [2024-11-18 04:56:49.798969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:26.514 [2024-11-18 04:56:49.798997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.514 pt2 00:16:26.514 04:56:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:26.514 04:56:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:26.514 04:56:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:26.514 [2024-11-18 04:56:49.994333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:26.514 [2024-11-18 04:56:49.994410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.514 [2024-11-18 04:56:49.994439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:16:26.514 [2024-11-18 04:56:49.994452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.514 [2024-11-18 04:56:49.994976] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.514 [2024-11-18 04:56:49.995006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:26.514 [2024-11-18 04:56:49.995126] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:26.514 [2024-11-18 04:56:49.995154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:26.514 [2024-11-18 04:56:49.995390] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:16:26.514 [2024-11-18 04:56:49.995406] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:26.514 [2024-11-18 04:56:49.995507] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:16:26.514 [2024-11-18 04:56:49.995837] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:16:26.514 [2024-11-18 04:56:49.995855] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:16:26.514 [2024-11-18 04:56:49.995983] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.514 pt3 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.514 04:56:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.774 04:56:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.774 "name": "raid_bdev1", 00:16:26.774 "uuid": "f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e", 00:16:26.774 "strip_size_kb": 64, 00:16:26.774 "state": "online", 00:16:26.774 "raid_level": "concat", 00:16:26.774 "superblock": true, 00:16:26.774 "num_base_bdevs": 3, 00:16:26.774 "num_base_bdevs_discovered": 3, 00:16:26.774 "num_base_bdevs_operational": 3, 00:16:26.774 "base_bdevs_list": [ 00:16:26.774 { 00:16:26.774 "name": "pt1", 00:16:26.774 "uuid": "5b634892-906f-54e0-b6af-45fc7cb7eadd", 00:16:26.774 "is_configured": true, 00:16:26.774 "data_offset": 2048, 00:16:26.774 "data_size": 63488 00:16:26.774 }, 00:16:26.774 { 00:16:26.774 "name": "pt2", 00:16:26.774 "uuid": "d6697ece-9486-5fd0-8a20-4dc3777cd0e2", 00:16:26.774 "is_configured": true, 00:16:26.774 "data_offset": 2048, 00:16:26.774 "data_size": 63488 00:16:26.774 }, 00:16:26.774 { 00:16:26.774 "name": "pt3", 00:16:26.774 "uuid": "9c5d8ee8-66b4-5f0e-9c3d-49eb6b49d036", 00:16:26.774 "is_configured": true, 00:16:26.774 "data_offset": 2048, 00:16:26.774 "data_size": 63488 00:16:26.774 } 00:16:26.774 ] 00:16:26.774 }' 00:16:26.774 04:56:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.774 04:56:50 -- common/autotest_common.sh@10 -- # set +x 00:16:27.032 04:56:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:27.032 04:56:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:27.291 [2024-11-18 04:56:50.726822] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.291 04:56:50 -- bdev/bdev_raid.sh@430 -- # '[' f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e '!=' f86e92dd-2af9-4c3c-b9b7-1a65ab86ad8e ']' 00:16:27.291 04:56:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:27.291 04:56:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:27.291 
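The round-trip assertion exercised just above is the heart of the superblock test: the uuid read back from the re-assembled raid_bdev1 must equal the uuid captured when the array was first created. A sketch of that comparison ($raid_bdev_uuid stands for the value saved earlier in the run, f86e92dd-... here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    uuid=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    # A mismatch would mean the superblock did not survive teardown and
    # re-examination of the passthru base bdevs.
    [ "$uuid" = "$raid_bdev_uuid" ] || exit 1
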
04:56:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:27.291 04:56:50 -- bdev/bdev_raid.sh@511 -- # killprocess 73083 00:16:27.291 04:56:50 -- common/autotest_common.sh@936 -- # '[' -z 73083 ']' 00:16:27.291 04:56:50 -- common/autotest_common.sh@940 -- # kill -0 73083 00:16:27.291 04:56:50 -- common/autotest_common.sh@941 -- # uname 00:16:27.291 04:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:27.291 04:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73083 00:16:27.291 04:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:27.291 killing process with pid 73083 00:16:27.291 04:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:27.291 04:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73083' 00:16:27.291 04:56:50 -- common/autotest_common.sh@955 -- # kill 73083 00:16:27.291 [2024-11-18 04:56:50.777492] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.291 04:56:50 -- common/autotest_common.sh@960 -- # wait 73083 00:16:27.291 [2024-11-18 04:56:50.777592] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.291 [2024-11-18 04:56:50.777659] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.291 [2024-11-18 04:56:50.777678] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:16:27.550 [2024-11-18 04:56:51.007087] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:28.928 00:16:28.928 real 0m9.301s 00:16:28.928 user 0m15.240s 00:16:28.928 sys 0m1.326s 00:16:28.928 04:56:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:28.928 04:56:52 -- common/autotest_common.sh@10 -- # set +x 00:16:28.928 ************************************ 00:16:28.928 END TEST raid_superblock_test 00:16:28.928 ************************************ 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:28.928 04:56:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:28.928 04:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.928 04:56:52 -- common/autotest_common.sh@10 -- # set +x 00:16:28.928 ************************************ 00:16:28.928 START TEST raid_state_function_test 00:16:28.928 ************************************ 00:16:28.928 04:56:52 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs 
)) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:28.928 Process raid pid: 73359 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=73359 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73359' 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73359 /var/tmp/spdk-raid.sock 00:16:28.928 04:56:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:28.928 04:56:52 -- common/autotest_common.sh@829 -- # '[' -z 73359 ']' 00:16:28.928 04:56:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.928 04:56:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.928 04:56:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.928 04:56:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.928 04:56:52 -- common/autotest_common.sh@10 -- # set +x 00:16:28.928 [2024-11-18 04:56:52.160579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
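Each *_test function here runs against its own short-lived SPDK app; raid_pid 73359 above comes from launching bdev_svc with raid debug logging and blocking until the RPC socket answers. Roughly (a sketch; waitforlisten is the autotest_common.sh helper that does the polling):

    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    # -r: RPC socket path, -i: shm id, -L bdev_raid: enables the
    # bdev_raid DEBUG lines seen throughout this log.
    $svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock
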
00:16:28.928 [2024-11-18 04:56:52.160929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.928 [2024-11-18 04:56:52.328202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.188 [2024-11-18 04:56:52.494732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.188 [2024-11-18 04:56:52.654467] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.755 04:56:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.755 04:56:53 -- common/autotest_common.sh@862 -- # return 0 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:29.755 [2024-11-18 04:56:53.233314] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.755 [2024-11-18 04:56:53.233545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.755 [2024-11-18 04:56:53.233573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.755 [2024-11-18 04:56:53.233590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.755 [2024-11-18 04:56:53.233599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.755 [2024-11-18 04:56:53.233611] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.755 04:56:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.014 04:56:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.014 "name": "Existed_Raid", 00:16:30.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.014 "strip_size_kb": 0, 00:16:30.014 "state": "configuring", 00:16:30.014 "raid_level": "raid1", 00:16:30.014 "superblock": false, 00:16:30.014 "num_base_bdevs": 3, 00:16:30.014 "num_base_bdevs_discovered": 0, 00:16:30.014 "num_base_bdevs_operational": 3, 00:16:30.014 "base_bdevs_list": [ 00:16:30.014 { 00:16:30.014 "name": "BaseBdev1", 00:16:30.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.014 "is_configured": false, 00:16:30.014 "data_offset": 0, 00:16:30.014 "data_size": 0 00:16:30.014 }, 00:16:30.014 { 00:16:30.014 "name": "BaseBdev2", 00:16:30.014 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:30.014 "is_configured": false, 00:16:30.014 "data_offset": 0, 00:16:30.014 "data_size": 0 00:16:30.014 }, 00:16:30.014 { 00:16:30.014 "name": "BaseBdev3", 00:16:30.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.014 "is_configured": false, 00:16:30.014 "data_offset": 0, 00:16:30.014 "data_size": 0 00:16:30.014 } 00:16:30.014 ] 00:16:30.014 }' 00:16:30.014 04:56:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.014 04:56:53 -- common/autotest_common.sh@10 -- # set +x 00:16:30.582 04:56:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.582 [2024-11-18 04:56:54.005477] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.582 [2024-11-18 04:56:54.005531] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:30.582 04:56:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:30.840 [2024-11-18 04:56:54.221545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.840 [2024-11-18 04:56:54.221642] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.840 [2024-11-18 04:56:54.221671] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.840 [2024-11-18 04:56:54.221688] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.840 [2024-11-18 04:56:54.221697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.840 [2024-11-18 04:56:54.221709] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.840 04:56:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.099 [2024-11-18 04:56:54.455452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.099 BaseBdev1 00:16:31.099 04:56:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:31.099 04:56:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:31.099 04:56:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:31.099 04:56:54 -- common/autotest_common.sh@899 -- # local i 00:16:31.099 04:56:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:31.099 04:56:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:31.099 04:56:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:31.358 04:56:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:31.358 [ 00:16:31.358 { 00:16:31.358 "name": "BaseBdev1", 00:16:31.358 "aliases": [ 00:16:31.358 "f5ae92ef-15f6-473f-b80c-64fc85e3e7bf" 00:16:31.358 ], 00:16:31.358 "product_name": "Malloc disk", 00:16:31.358 "block_size": 512, 00:16:31.358 "num_blocks": 65536, 00:16:31.358 "uuid": "f5ae92ef-15f6-473f-b80c-64fc85e3e7bf", 00:16:31.358 "assigned_rate_limits": { 00:16:31.358 "rw_ios_per_sec": 0, 00:16:31.358 "rw_mbytes_per_sec": 0, 00:16:31.358 "r_mbytes_per_sec": 0, 00:16:31.358 "w_mbytes_per_sec": 0 
00:16:31.358 }, 00:16:31.358 "claimed": true, 00:16:31.358 "claim_type": "exclusive_write", 00:16:31.358 "zoned": false, 00:16:31.358 "supported_io_types": { 00:16:31.358 "read": true, 00:16:31.358 "write": true, 00:16:31.358 "unmap": true, 00:16:31.358 "write_zeroes": true, 00:16:31.358 "flush": true, 00:16:31.358 "reset": true, 00:16:31.358 "compare": false, 00:16:31.358 "compare_and_write": false, 00:16:31.358 "abort": true, 00:16:31.358 "nvme_admin": false, 00:16:31.358 "nvme_io": false 00:16:31.358 }, 00:16:31.358 "memory_domains": [ 00:16:31.358 { 00:16:31.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.358 "dma_device_type": 2 00:16:31.358 } 00:16:31.358 ], 00:16:31.358 "driver_specific": {} 00:16:31.358 } 00:16:31.358 ] 00:16:31.358 04:56:54 -- common/autotest_common.sh@905 -- # return 0 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.358 04:56:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.617 04:56:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.617 "name": "Existed_Raid", 00:16:31.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.617 "strip_size_kb": 0, 00:16:31.617 "state": "configuring", 00:16:31.617 "raid_level": "raid1", 00:16:31.617 "superblock": false, 00:16:31.617 "num_base_bdevs": 3, 00:16:31.617 "num_base_bdevs_discovered": 1, 00:16:31.617 "num_base_bdevs_operational": 3, 00:16:31.617 "base_bdevs_list": [ 00:16:31.617 { 00:16:31.617 "name": "BaseBdev1", 00:16:31.617 "uuid": "f5ae92ef-15f6-473f-b80c-64fc85e3e7bf", 00:16:31.617 "is_configured": true, 00:16:31.617 "data_offset": 0, 00:16:31.617 "data_size": 65536 00:16:31.617 }, 00:16:31.617 { 00:16:31.617 "name": "BaseBdev2", 00:16:31.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.617 "is_configured": false, 00:16:31.617 "data_offset": 0, 00:16:31.617 "data_size": 0 00:16:31.617 }, 00:16:31.617 { 00:16:31.617 "name": "BaseBdev3", 00:16:31.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.617 "is_configured": false, 00:16:31.617 "data_offset": 0, 00:16:31.617 "data_size": 0 00:16:31.617 } 00:16:31.617 ] 00:16:31.617 }' 00:16:31.617 04:56:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.617 04:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:32.185 04:56:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.185 [2024-11-18 04:56:55.675888] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.185 [2024-11-18 04:56:55.675957] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 
name Existed_Raid, state configuring 00:16:32.185 04:56:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:32.185 04:56:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:32.443 [2024-11-18 04:56:55.880005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.443 [2024-11-18 04:56:55.881972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.443 [2024-11-18 04:56:55.882037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.443 [2024-11-18 04:56:55.882052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.443 [2024-11-18 04:56:55.882066] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.443 04:56:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.724 04:56:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.724 "name": "Existed_Raid", 00:16:32.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.724 "strip_size_kb": 0, 00:16:32.725 "state": "configuring", 00:16:32.725 "raid_level": "raid1", 00:16:32.725 "superblock": false, 00:16:32.725 "num_base_bdevs": 3, 00:16:32.725 "num_base_bdevs_discovered": 1, 00:16:32.725 "num_base_bdevs_operational": 3, 00:16:32.725 "base_bdevs_list": [ 00:16:32.725 { 00:16:32.725 "name": "BaseBdev1", 00:16:32.725 "uuid": "f5ae92ef-15f6-473f-b80c-64fc85e3e7bf", 00:16:32.725 "is_configured": true, 00:16:32.725 "data_offset": 0, 00:16:32.725 "data_size": 65536 00:16:32.725 }, 00:16:32.725 { 00:16:32.725 "name": "BaseBdev2", 00:16:32.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.725 "is_configured": false, 00:16:32.725 "data_offset": 0, 00:16:32.725 "data_size": 0 00:16:32.725 }, 00:16:32.725 { 00:16:32.725 "name": "BaseBdev3", 00:16:32.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.725 "is_configured": false, 00:16:32.725 "data_offset": 0, 00:16:32.725 "data_size": 0 00:16:32.725 } 00:16:32.725 ] 00:16:32.725 }' 00:16:32.725 04:56:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.725 04:56:56 -- common/autotest_common.sh@10 -- # set +x 00:16:33.003 04:56:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:33.262 [2024-11-18 04:56:56.651380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.262 BaseBdev2 00:16:33.262 04:56:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:33.262 04:56:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:33.262 04:56:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:33.262 04:56:56 -- common/autotest_common.sh@899 -- # local i 00:16:33.262 04:56:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:33.262 04:56:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:33.262 04:56:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.521 04:56:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.778 [ 00:16:33.778 { 00:16:33.778 "name": "BaseBdev2", 00:16:33.778 "aliases": [ 00:16:33.778 "aa65d2ef-9645-4e90-b2a4-e7e57f56ea59" 00:16:33.778 ], 00:16:33.778 "product_name": "Malloc disk", 00:16:33.778 "block_size": 512, 00:16:33.778 "num_blocks": 65536, 00:16:33.778 "uuid": "aa65d2ef-9645-4e90-b2a4-e7e57f56ea59", 00:16:33.778 "assigned_rate_limits": { 00:16:33.778 "rw_ios_per_sec": 0, 00:16:33.778 "rw_mbytes_per_sec": 0, 00:16:33.778 "r_mbytes_per_sec": 0, 00:16:33.778 "w_mbytes_per_sec": 0 00:16:33.778 }, 00:16:33.778 "claimed": true, 00:16:33.778 "claim_type": "exclusive_write", 00:16:33.778 "zoned": false, 00:16:33.778 "supported_io_types": { 00:16:33.778 "read": true, 00:16:33.778 "write": true, 00:16:33.778 "unmap": true, 00:16:33.778 "write_zeroes": true, 00:16:33.778 "flush": true, 00:16:33.778 "reset": true, 00:16:33.778 "compare": false, 00:16:33.778 "compare_and_write": false, 00:16:33.778 "abort": true, 00:16:33.778 "nvme_admin": false, 00:16:33.778 "nvme_io": false 00:16:33.778 }, 00:16:33.778 "memory_domains": [ 00:16:33.778 { 00:16:33.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.778 "dma_device_type": 2 00:16:33.778 } 00:16:33.778 ], 00:16:33.778 "driver_specific": {} 00:16:33.778 } 00:16:33.778 ] 00:16:33.778 04:56:57 -- common/autotest_common.sh@905 -- # return 0 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.778 04:56:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.036 04:56:57 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:34.036 "name": "Existed_Raid", 00:16:34.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.036 "strip_size_kb": 0, 00:16:34.036 "state": "configuring", 00:16:34.036 "raid_level": "raid1", 00:16:34.036 "superblock": false, 00:16:34.036 "num_base_bdevs": 3, 00:16:34.036 "num_base_bdevs_discovered": 2, 00:16:34.036 "num_base_bdevs_operational": 3, 00:16:34.036 "base_bdevs_list": [ 00:16:34.036 { 00:16:34.036 "name": "BaseBdev1", 00:16:34.036 "uuid": "f5ae92ef-15f6-473f-b80c-64fc85e3e7bf", 00:16:34.036 "is_configured": true, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 65536 00:16:34.036 }, 00:16:34.036 { 00:16:34.036 "name": "BaseBdev2", 00:16:34.036 "uuid": "aa65d2ef-9645-4e90-b2a4-e7e57f56ea59", 00:16:34.036 "is_configured": true, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 65536 00:16:34.036 }, 00:16:34.036 { 00:16:34.036 "name": "BaseBdev3", 00:16:34.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.036 "is_configured": false, 00:16:34.036 "data_offset": 0, 00:16:34.036 "data_size": 0 00:16:34.036 } 00:16:34.036 ] 00:16:34.036 }' 00:16:34.036 04:56:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.036 04:56:57 -- common/autotest_common.sh@10 -- # set +x 00:16:34.295 04:56:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:34.554 [2024-11-18 04:56:57.914709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.554 [2024-11-18 04:56:57.914978] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:34.554 [2024-11-18 04:56:57.915038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:34.554 [2024-11-18 04:56:57.915337] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:34.554 [2024-11-18 04:56:57.915860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:34.554 [2024-11-18 04:56:57.916057] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:34.554 [2024-11-18 04:56:57.916490] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.554 BaseBdev3 00:16:34.554 04:56:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:34.554 04:56:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:34.554 04:56:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:34.554 04:56:57 -- common/autotest_common.sh@899 -- # local i 00:16:34.554 04:56:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:34.554 04:56:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:34.554 04:56:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.812 04:56:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:35.071 [ 00:16:35.071 { 00:16:35.071 "name": "BaseBdev3", 00:16:35.071 "aliases": [ 00:16:35.071 "f942c073-fa3c-4ec6-add4-e80e0e3c5b12" 00:16:35.071 ], 00:16:35.071 "product_name": "Malloc disk", 00:16:35.071 "block_size": 512, 00:16:35.071 "num_blocks": 65536, 00:16:35.071 "uuid": "f942c073-fa3c-4ec6-add4-e80e0e3c5b12", 00:16:35.071 "assigned_rate_limits": { 00:16:35.071 "rw_ios_per_sec": 0, 00:16:35.071 "rw_mbytes_per_sec": 0, 
00:16:35.071 "r_mbytes_per_sec": 0, 00:16:35.071 "w_mbytes_per_sec": 0 00:16:35.071 }, 00:16:35.071 "claimed": true, 00:16:35.071 "claim_type": "exclusive_write", 00:16:35.071 "zoned": false, 00:16:35.071 "supported_io_types": { 00:16:35.071 "read": true, 00:16:35.071 "write": true, 00:16:35.071 "unmap": true, 00:16:35.071 "write_zeroes": true, 00:16:35.071 "flush": true, 00:16:35.071 "reset": true, 00:16:35.071 "compare": false, 00:16:35.071 "compare_and_write": false, 00:16:35.071 "abort": true, 00:16:35.071 "nvme_admin": false, 00:16:35.071 "nvme_io": false 00:16:35.071 }, 00:16:35.071 "memory_domains": [ 00:16:35.071 { 00:16:35.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.071 "dma_device_type": 2 00:16:35.071 } 00:16:35.071 ], 00:16:35.071 "driver_specific": {} 00:16:35.071 } 00:16:35.071 ] 00:16:35.071 04:56:58 -- common/autotest_common.sh@905 -- # return 0 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.071 "name": "Existed_Raid", 00:16:35.071 "uuid": "5778b2aa-c11f-495e-82f7-334f9ff66648", 00:16:35.071 "strip_size_kb": 0, 00:16:35.071 "state": "online", 00:16:35.071 "raid_level": "raid1", 00:16:35.071 "superblock": false, 00:16:35.071 "num_base_bdevs": 3, 00:16:35.071 "num_base_bdevs_discovered": 3, 00:16:35.071 "num_base_bdevs_operational": 3, 00:16:35.071 "base_bdevs_list": [ 00:16:35.071 { 00:16:35.071 "name": "BaseBdev1", 00:16:35.071 "uuid": "f5ae92ef-15f6-473f-b80c-64fc85e3e7bf", 00:16:35.071 "is_configured": true, 00:16:35.071 "data_offset": 0, 00:16:35.071 "data_size": 65536 00:16:35.071 }, 00:16:35.071 { 00:16:35.071 "name": "BaseBdev2", 00:16:35.071 "uuid": "aa65d2ef-9645-4e90-b2a4-e7e57f56ea59", 00:16:35.071 "is_configured": true, 00:16:35.071 "data_offset": 0, 00:16:35.071 "data_size": 65536 00:16:35.071 }, 00:16:35.071 { 00:16:35.071 "name": "BaseBdev3", 00:16:35.071 "uuid": "f942c073-fa3c-4ec6-add4-e80e0e3c5b12", 00:16:35.071 "is_configured": true, 00:16:35.071 "data_offset": 0, 00:16:35.071 "data_size": 65536 00:16:35.071 } 00:16:35.071 ] 00:16:35.071 }' 00:16:35.071 04:56:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.071 04:56:58 -- common/autotest_common.sh@10 -- # set +x 00:16:35.330 04:56:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:35.588 [2024-11-18 
04:56:59.059112] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.855 04:56:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:35.855 04:56:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:35.855 04:56:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.856 "name": "Existed_Raid", 00:16:35.856 "uuid": "5778b2aa-c11f-495e-82f7-334f9ff66648", 00:16:35.856 "strip_size_kb": 0, 00:16:35.856 "state": "online", 00:16:35.856 "raid_level": "raid1", 00:16:35.856 "superblock": false, 00:16:35.856 "num_base_bdevs": 3, 00:16:35.856 "num_base_bdevs_discovered": 2, 00:16:35.856 "num_base_bdevs_operational": 2, 00:16:35.856 "base_bdevs_list": [ 00:16:35.856 { 00:16:35.856 "name": null, 00:16:35.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.856 "is_configured": false, 00:16:35.856 "data_offset": 0, 00:16:35.856 "data_size": 65536 00:16:35.856 }, 00:16:35.856 { 00:16:35.856 "name": "BaseBdev2", 00:16:35.856 "uuid": "aa65d2ef-9645-4e90-b2a4-e7e57f56ea59", 00:16:35.856 "is_configured": true, 00:16:35.856 "data_offset": 0, 00:16:35.856 "data_size": 65536 00:16:35.856 }, 00:16:35.856 { 00:16:35.856 "name": "BaseBdev3", 00:16:35.856 "uuid": "f942c073-fa3c-4ec6-add4-e80e0e3c5b12", 00:16:35.856 "is_configured": true, 00:16:35.856 "data_offset": 0, 00:16:35.856 "data_size": 65536 00:16:35.856 } 00:16:35.856 ] 00:16:35.856 }' 00:16:35.856 04:56:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.856 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:36.118 04:56:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:36.118 04:56:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:36.118 04:56:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.118 04:56:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:36.376 04:56:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:36.376 04:56:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.376 04:56:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:36.634 [2024-11-18 04:57:00.088609] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
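The trace around this point exercises raid1's redundancy: base bdevs are hot-removed one at a time (bdev_malloc_delete BaseBdev1, then BaseBdev2), and after each removal the array state is re-queried. A minimal sketch of that check, assuming the same RPC socket and raid bdev name as this run; after the first removal the expected picture is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Re-fetch the raid bdev, as bdev_raid.sh@127 does, and pick it out with jq.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    # raid1 has redundancy, so losing one of three members must leave the
    # array online with one fewer discovered base bdev.
    [ "$(jq -r '.state' <<< "$info")" = online ] || exit 1
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 2 ] || exit 1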
00:16:36.892 04:57:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:36.892 04:57:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:36.892 04:57:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.892 04:57:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:37.150 04:57:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:37.150 04:57:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.150 04:57:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:37.150 [2024-11-18 04:57:00.655096] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.150 [2024-11-18 04:57:00.655141] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.150 [2024-11-18 04:57:00.655199] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.409 [2024-11-18 04:57:00.730096] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.409 [2024-11-18 04:57:00.730416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:37.409 04:57:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:37.409 04:57:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:37.409 04:57:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.409 04:57:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:37.668 04:57:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:37.668 04:57:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:37.668 04:57:01 -- bdev/bdev_raid.sh@287 -- # killprocess 73359 00:16:37.668 04:57:01 -- common/autotest_common.sh@936 -- # '[' -z 73359 ']' 00:16:37.668 04:57:01 -- common/autotest_common.sh@940 -- # kill -0 73359 00:16:37.668 04:57:01 -- common/autotest_common.sh@941 -- # uname 00:16:37.668 04:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.668 04:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73359 00:16:37.668 killing process with pid 73359 00:16:37.668 04:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.668 04:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.668 04:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73359' 00:16:37.668 04:57:01 -- common/autotest_common.sh@955 -- # kill 73359 00:16:37.668 [2024-11-18 04:57:01.035524] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.668 [2024-11-18 04:57:01.035667] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.668 04:57:01 -- common/autotest_common.sh@960 -- # wait 73359 00:16:38.603 ************************************ 00:16:38.603 END TEST raid_state_function_test 00:16:38.603 ************************************ 00:16:38.603 04:57:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:38.603 00:16:38.603 real 0m10.013s 00:16:38.603 user 0m16.543s 00:16:38.603 sys 0m1.460s 00:16:38.603 04:57:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:38.603 04:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:38.862 
04:57:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:38.862 04:57:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.862 04:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:38.862 ************************************ 00:16:38.862 START TEST raid_state_function_test_sb 00:16:38.862 ************************************ 00:16:38.862 04:57:02 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:38.862 04:57:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=73695 00:16:38.863 Process raid pid: 73695 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73695' 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73695 /var/tmp/spdk-raid.sock 00:16:38.863 04:57:02 -- common/autotest_common.sh@829 -- # '[' -z 73695 ']' 00:16:38.863 04:57:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:38.863 04:57:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:38.863 04:57:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:38.863 04:57:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:38.863 04:57:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.863 04:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:38.863 [2024-11-18 04:57:02.221549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
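As the trace shows, each state-function test boots a private bdev_svc app on its own RPC socket and blocks until it is ready before issuing any bdev_raid RPCs. A condensed sketch of that bootstrap, with the paths and flags taken from this trace (waitforlisten is the autotest_common.sh helper):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
            -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # The -s flag is what distinguishes this superblock variant from the
    # previous test: bdev_raid_create writes a superblock onto each base bdev.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' \
            -n Existed_Raid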
00:16:38.863 [2024-11-18 04:57:02.221717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.121 [2024-11-18 04:57:02.397405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.121 [2024-11-18 04:57:02.628925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.380 [2024-11-18 04:57:02.797879] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.946 04:57:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.946 04:57:03 -- common/autotest_common.sh@862 -- # return 0 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:39.946 [2024-11-18 04:57:03.404793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.946 [2024-11-18 04:57:03.404865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.946 [2024-11-18 04:57:03.404881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.946 [2024-11-18 04:57:03.404897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.946 [2024-11-18 04:57:03.404907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.946 [2024-11-18 04:57:03.404921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.946 04:57:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.205 04:57:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.205 "name": "Existed_Raid", 00:16:40.205 "uuid": "7239ca56-32ee-4b77-8081-51e5661581de", 00:16:40.205 "strip_size_kb": 0, 00:16:40.205 "state": "configuring", 00:16:40.205 "raid_level": "raid1", 00:16:40.205 "superblock": true, 00:16:40.205 "num_base_bdevs": 3, 00:16:40.205 "num_base_bdevs_discovered": 0, 00:16:40.205 "num_base_bdevs_operational": 3, 00:16:40.205 "base_bdevs_list": [ 00:16:40.205 { 00:16:40.205 "name": "BaseBdev1", 00:16:40.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.205 "is_configured": false, 00:16:40.205 "data_offset": 0, 00:16:40.205 "data_size": 0 00:16:40.205 }, 00:16:40.205 { 00:16:40.205 "name": "BaseBdev2", 00:16:40.205 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:40.205 "is_configured": false, 00:16:40.205 "data_offset": 0, 00:16:40.205 "data_size": 0 00:16:40.205 }, 00:16:40.205 { 00:16:40.205 "name": "BaseBdev3", 00:16:40.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.205 "is_configured": false, 00:16:40.205 "data_offset": 0, 00:16:40.205 "data_size": 0 00:16:40.205 } 00:16:40.205 ] 00:16:40.205 }' 00:16:40.205 04:57:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.205 04:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:40.771 04:57:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:40.771 [2024-11-18 04:57:04.228842] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.771 [2024-11-18 04:57:04.228945] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:40.771 04:57:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:41.030 [2024-11-18 04:57:04.436976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.030 [2024-11-18 04:57:04.437041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.030 [2024-11-18 04:57:04.437055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.030 [2024-11-18 04:57:04.437071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.030 [2024-11-18 04:57:04.437080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.030 [2024-11-18 04:57:04.437092] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.030 04:57:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.289 [2024-11-18 04:57:04.678252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.289 BaseBdev1 00:16:41.289 04:57:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:41.289 04:57:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:41.289 04:57:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.289 04:57:04 -- common/autotest_common.sh@899 -- # local i 00:16:41.289 04:57:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.289 04:57:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.289 04:57:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.547 04:57:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.806 [ 00:16:41.806 { 00:16:41.806 "name": "BaseBdev1", 00:16:41.806 "aliases": [ 00:16:41.806 "2192f55b-f852-43a0-b610-96220d057374" 00:16:41.806 ], 00:16:41.806 "product_name": "Malloc disk", 00:16:41.806 "block_size": 512, 00:16:41.806 "num_blocks": 65536, 00:16:41.806 "uuid": "2192f55b-f852-43a0-b610-96220d057374", 00:16:41.806 "assigned_rate_limits": { 00:16:41.806 "rw_ios_per_sec": 0, 00:16:41.806 "rw_mbytes_per_sec": 0, 00:16:41.806 "r_mbytes_per_sec": 0, 00:16:41.806 "w_mbytes_per_sec": 0 
00:16:41.806 }, 00:16:41.806 "claimed": true, 00:16:41.806 "claim_type": "exclusive_write", 00:16:41.806 "zoned": false, 00:16:41.806 "supported_io_types": { 00:16:41.806 "read": true, 00:16:41.806 "write": true, 00:16:41.806 "unmap": true, 00:16:41.806 "write_zeroes": true, 00:16:41.806 "flush": true, 00:16:41.806 "reset": true, 00:16:41.806 "compare": false, 00:16:41.806 "compare_and_write": false, 00:16:41.806 "abort": true, 00:16:41.806 "nvme_admin": false, 00:16:41.806 "nvme_io": false 00:16:41.806 }, 00:16:41.806 "memory_domains": [ 00:16:41.806 { 00:16:41.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.806 "dma_device_type": 2 00:16:41.806 } 00:16:41.806 ], 00:16:41.806 "driver_specific": {} 00:16:41.807 } 00:16:41.807 ] 00:16:41.807 04:57:05 -- common/autotest_common.sh@905 -- # return 0 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.807 04:57:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.066 04:57:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.066 "name": "Existed_Raid", 00:16:42.066 "uuid": "45e275bc-3397-43d7-9412-fa6f647e629c", 00:16:42.066 "strip_size_kb": 0, 00:16:42.066 "state": "configuring", 00:16:42.066 "raid_level": "raid1", 00:16:42.066 "superblock": true, 00:16:42.066 "num_base_bdevs": 3, 00:16:42.066 "num_base_bdevs_discovered": 1, 00:16:42.066 "num_base_bdevs_operational": 3, 00:16:42.066 "base_bdevs_list": [ 00:16:42.066 { 00:16:42.066 "name": "BaseBdev1", 00:16:42.066 "uuid": "2192f55b-f852-43a0-b610-96220d057374", 00:16:42.066 "is_configured": true, 00:16:42.066 "data_offset": 2048, 00:16:42.066 "data_size": 63488 00:16:42.066 }, 00:16:42.066 { 00:16:42.066 "name": "BaseBdev2", 00:16:42.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.066 "is_configured": false, 00:16:42.066 "data_offset": 0, 00:16:42.066 "data_size": 0 00:16:42.066 }, 00:16:42.066 { 00:16:42.066 "name": "BaseBdev3", 00:16:42.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.066 "is_configured": false, 00:16:42.066 "data_offset": 0, 00:16:42.066 "data_size": 0 00:16:42.066 } 00:16:42.066 ] 00:16:42.066 }' 00:16:42.066 04:57:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.066 04:57:05 -- common/autotest_common.sh@10 -- # set +x 00:16:42.325 04:57:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.584 [2024-11-18 04:57:05.938700] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.584 [2024-11-18 04:57:05.938775] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:16:42.584 04:57:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:42.584 04:57:05 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:42.842 04:57:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:43.101 BaseBdev1 00:16:43.101 04:57:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:43.101 04:57:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:43.101 04:57:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:43.101 04:57:06 -- common/autotest_common.sh@899 -- # local i 00:16:43.101 04:57:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:43.101 04:57:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:43.101 04:57:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.359 04:57:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.618 [ 00:16:43.618 { 00:16:43.618 "name": "BaseBdev1", 00:16:43.618 "aliases": [ 00:16:43.618 "756f7552-9103-4251-b0a3-83d157a7346e" 00:16:43.618 ], 00:16:43.618 "product_name": "Malloc disk", 00:16:43.618 "block_size": 512, 00:16:43.618 "num_blocks": 65536, 00:16:43.618 "uuid": "756f7552-9103-4251-b0a3-83d157a7346e", 00:16:43.618 "assigned_rate_limits": { 00:16:43.618 "rw_ios_per_sec": 0, 00:16:43.618 "rw_mbytes_per_sec": 0, 00:16:43.618 "r_mbytes_per_sec": 0, 00:16:43.618 "w_mbytes_per_sec": 0 00:16:43.618 }, 00:16:43.618 "claimed": false, 00:16:43.618 "zoned": false, 00:16:43.618 "supported_io_types": { 00:16:43.618 "read": true, 00:16:43.618 "write": true, 00:16:43.618 "unmap": true, 00:16:43.618 "write_zeroes": true, 00:16:43.618 "flush": true, 00:16:43.618 "reset": true, 00:16:43.618 "compare": false, 00:16:43.618 "compare_and_write": false, 00:16:43.618 "abort": true, 00:16:43.618 "nvme_admin": false, 00:16:43.618 "nvme_io": false 00:16:43.618 }, 00:16:43.618 "memory_domains": [ 00:16:43.618 { 00:16:43.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.618 "dma_device_type": 2 00:16:43.618 } 00:16:43.618 ], 00:16:43.618 "driver_specific": {} 00:16:43.618 } 00:16:43.618 ] 00:16:43.618 04:57:06 -- common/autotest_common.sh@905 -- # return 0 00:16:43.618 04:57:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:43.618 [2024-11-18 04:57:07.113691] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.618 [2024-11-18 04:57:07.115886] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.618 [2024-11-18 04:57:07.115952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.618 [2024-11-18 04:57:07.115967] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:43.618 [2024-11-18 04:57:07.115982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:43.618 04:57:07 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.618 04:57:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.877 04:57:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.877 "name": "Existed_Raid", 00:16:43.877 "uuid": "1cff8493-8550-49b3-b5a7-9af944e25c96", 00:16:43.877 "strip_size_kb": 0, 00:16:43.877 "state": "configuring", 00:16:43.877 "raid_level": "raid1", 00:16:43.877 "superblock": true, 00:16:43.877 "num_base_bdevs": 3, 00:16:43.877 "num_base_bdevs_discovered": 1, 00:16:43.877 "num_base_bdevs_operational": 3, 00:16:43.877 "base_bdevs_list": [ 00:16:43.877 { 00:16:43.877 "name": "BaseBdev1", 00:16:43.877 "uuid": "756f7552-9103-4251-b0a3-83d157a7346e", 00:16:43.877 "is_configured": true, 00:16:43.877 "data_offset": 2048, 00:16:43.877 "data_size": 63488 00:16:43.877 }, 00:16:43.877 { 00:16:43.877 "name": "BaseBdev2", 00:16:43.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.877 "is_configured": false, 00:16:43.877 "data_offset": 0, 00:16:43.877 "data_size": 0 00:16:43.877 }, 00:16:43.877 { 00:16:43.877 "name": "BaseBdev3", 00:16:43.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.877 "is_configured": false, 00:16:43.877 "data_offset": 0, 00:16:43.877 "data_size": 0 00:16:43.877 } 00:16:43.877 ] 00:16:43.877 }' 00:16:43.877 04:57:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.877 04:57:07 -- common/autotest_common.sh@10 -- # set +x 00:16:44.443 04:57:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:44.702 [2024-11-18 04:57:07.989150] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.702 BaseBdev2 00:16:44.702 04:57:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:44.702 04:57:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:44.702 04:57:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.702 04:57:08 -- common/autotest_common.sh@899 -- # local i 00:16:44.702 04:57:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.702 04:57:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.702 04:57:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.702 04:57:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.960 [ 00:16:44.960 { 00:16:44.960 "name": "BaseBdev2", 00:16:44.960 "aliases": [ 00:16:44.960 
"aa0d8f10-e767-4375-a92a-12f232b5b25d" 00:16:44.960 ], 00:16:44.960 "product_name": "Malloc disk", 00:16:44.960 "block_size": 512, 00:16:44.960 "num_blocks": 65536, 00:16:44.960 "uuid": "aa0d8f10-e767-4375-a92a-12f232b5b25d", 00:16:44.960 "assigned_rate_limits": { 00:16:44.960 "rw_ios_per_sec": 0, 00:16:44.960 "rw_mbytes_per_sec": 0, 00:16:44.960 "r_mbytes_per_sec": 0, 00:16:44.960 "w_mbytes_per_sec": 0 00:16:44.960 }, 00:16:44.960 "claimed": true, 00:16:44.960 "claim_type": "exclusive_write", 00:16:44.960 "zoned": false, 00:16:44.960 "supported_io_types": { 00:16:44.960 "read": true, 00:16:44.960 "write": true, 00:16:44.960 "unmap": true, 00:16:44.960 "write_zeroes": true, 00:16:44.960 "flush": true, 00:16:44.960 "reset": true, 00:16:44.960 "compare": false, 00:16:44.961 "compare_and_write": false, 00:16:44.961 "abort": true, 00:16:44.961 "nvme_admin": false, 00:16:44.961 "nvme_io": false 00:16:44.961 }, 00:16:44.961 "memory_domains": [ 00:16:44.961 { 00:16:44.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.961 "dma_device_type": 2 00:16:44.961 } 00:16:44.961 ], 00:16:44.961 "driver_specific": {} 00:16:44.961 } 00:16:44.961 ] 00:16:44.961 04:57:08 -- common/autotest_common.sh@905 -- # return 0 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.961 04:57:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.219 04:57:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.219 "name": "Existed_Raid", 00:16:45.219 "uuid": "1cff8493-8550-49b3-b5a7-9af944e25c96", 00:16:45.219 "strip_size_kb": 0, 00:16:45.219 "state": "configuring", 00:16:45.219 "raid_level": "raid1", 00:16:45.219 "superblock": true, 00:16:45.219 "num_base_bdevs": 3, 00:16:45.219 "num_base_bdevs_discovered": 2, 00:16:45.219 "num_base_bdevs_operational": 3, 00:16:45.219 "base_bdevs_list": [ 00:16:45.219 { 00:16:45.219 "name": "BaseBdev1", 00:16:45.219 "uuid": "756f7552-9103-4251-b0a3-83d157a7346e", 00:16:45.219 "is_configured": true, 00:16:45.219 "data_offset": 2048, 00:16:45.219 "data_size": 63488 00:16:45.219 }, 00:16:45.219 { 00:16:45.219 "name": "BaseBdev2", 00:16:45.219 "uuid": "aa0d8f10-e767-4375-a92a-12f232b5b25d", 00:16:45.219 "is_configured": true, 00:16:45.219 "data_offset": 2048, 00:16:45.219 "data_size": 63488 00:16:45.219 }, 00:16:45.219 { 00:16:45.219 "name": "BaseBdev3", 00:16:45.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.219 "is_configured": false, 00:16:45.219 "data_offset": 0, 00:16:45.219 "data_size": 0 00:16:45.219 } 
00:16:45.219 ] 00:16:45.219 }' 00:16:45.219 04:57:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.219 04:57:08 -- common/autotest_common.sh@10 -- # set +x 00:16:45.478 04:57:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:45.736 [2024-11-18 04:57:09.247139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.736 [2024-11-18 04:57:09.247478] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:45.736 [2024-11-18 04:57:09.247503] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.736 [2024-11-18 04:57:09.247666] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:45.736 [2024-11-18 04:57:09.248058] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:45.736 [2024-11-18 04:57:09.248075] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:45.736 [2024-11-18 04:57:09.248241] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.736 BaseBdev3 00:16:45.994 04:57:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:45.994 04:57:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:45.994 04:57:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:45.994 04:57:09 -- common/autotest_common.sh@899 -- # local i 00:16:45.994 04:57:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:45.994 04:57:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:45.994 04:57:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.994 04:57:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.252 [ 00:16:46.252 { 00:16:46.252 "name": "BaseBdev3", 00:16:46.252 "aliases": [ 00:16:46.252 "d8d96e5d-8548-41dd-b63d-a9d25a029f69" 00:16:46.252 ], 00:16:46.252 "product_name": "Malloc disk", 00:16:46.252 "block_size": 512, 00:16:46.252 "num_blocks": 65536, 00:16:46.252 "uuid": "d8d96e5d-8548-41dd-b63d-a9d25a029f69", 00:16:46.252 "assigned_rate_limits": { 00:16:46.252 "rw_ios_per_sec": 0, 00:16:46.252 "rw_mbytes_per_sec": 0, 00:16:46.252 "r_mbytes_per_sec": 0, 00:16:46.252 "w_mbytes_per_sec": 0 00:16:46.252 }, 00:16:46.252 "claimed": true, 00:16:46.252 "claim_type": "exclusive_write", 00:16:46.252 "zoned": false, 00:16:46.252 "supported_io_types": { 00:16:46.252 "read": true, 00:16:46.252 "write": true, 00:16:46.252 "unmap": true, 00:16:46.252 "write_zeroes": true, 00:16:46.252 "flush": true, 00:16:46.252 "reset": true, 00:16:46.252 "compare": false, 00:16:46.252 "compare_and_write": false, 00:16:46.252 "abort": true, 00:16:46.252 "nvme_admin": false, 00:16:46.252 "nvme_io": false 00:16:46.252 }, 00:16:46.252 "memory_domains": [ 00:16:46.252 { 00:16:46.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.252 "dma_device_type": 2 00:16:46.252 } 00:16:46.252 ], 00:16:46.252 "driver_specific": {} 00:16:46.252 } 00:16:46.252 ] 00:16:46.252 04:57:09 -- common/autotest_common.sh@905 -- # return 0 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.252 04:57:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.253 04:57:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.511 04:57:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.511 "name": "Existed_Raid", 00:16:46.511 "uuid": "1cff8493-8550-49b3-b5a7-9af944e25c96", 00:16:46.511 "strip_size_kb": 0, 00:16:46.511 "state": "online", 00:16:46.511 "raid_level": "raid1", 00:16:46.511 "superblock": true, 00:16:46.511 "num_base_bdevs": 3, 00:16:46.511 "num_base_bdevs_discovered": 3, 00:16:46.511 "num_base_bdevs_operational": 3, 00:16:46.511 "base_bdevs_list": [ 00:16:46.511 { 00:16:46.511 "name": "BaseBdev1", 00:16:46.511 "uuid": "756f7552-9103-4251-b0a3-83d157a7346e", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 2048, 00:16:46.511 "data_size": 63488 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev2", 00:16:46.511 "uuid": "aa0d8f10-e767-4375-a92a-12f232b5b25d", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 2048, 00:16:46.511 "data_size": 63488 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev3", 00:16:46.511 "uuid": "d8d96e5d-8548-41dd-b63d-a9d25a029f69", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 2048, 00:16:46.511 "data_size": 63488 00:16:46.511 } 00:16:46.511 ] 00:16:46.511 }' 00:16:46.511 04:57:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.511 04:57:09 -- common/autotest_common.sh@10 -- # set +x 00:16:46.769 04:57:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:47.028 [2024-11-18 04:57:10.459692] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
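One detail worth noticing in the dumps above: with the superblock enabled, every configured base bdev reports "data_offset": 2048 and "data_size": 63488, versus 0 and 65536 in the non-superblock run, i.e. 2048 of each malloc bdev's 65536 blocks are set aside at the front (65536 - 2048 = 63488 usable). A hedged one-liner, not part of the captured run, that would assert this layout over the online array:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs online \
        | jq -e '.[0].base_bdevs_list[] | select(.is_configured)
                 | .data_offset == 2048 and .data_size == 63488' >/dev/null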
00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.287 "name": "Existed_Raid", 00:16:47.287 "uuid": "1cff8493-8550-49b3-b5a7-9af944e25c96", 00:16:47.287 "strip_size_kb": 0, 00:16:47.287 "state": "online", 00:16:47.287 "raid_level": "raid1", 00:16:47.287 "superblock": true, 00:16:47.287 "num_base_bdevs": 3, 00:16:47.287 "num_base_bdevs_discovered": 2, 00:16:47.287 "num_base_bdevs_operational": 2, 00:16:47.287 "base_bdevs_list": [ 00:16:47.287 { 00:16:47.287 "name": null, 00:16:47.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.287 "is_configured": false, 00:16:47.287 "data_offset": 2048, 00:16:47.287 "data_size": 63488 00:16:47.287 }, 00:16:47.287 { 00:16:47.287 "name": "BaseBdev2", 00:16:47.287 "uuid": "aa0d8f10-e767-4375-a92a-12f232b5b25d", 00:16:47.287 "is_configured": true, 00:16:47.287 "data_offset": 2048, 00:16:47.287 "data_size": 63488 00:16:47.287 }, 00:16:47.287 { 00:16:47.287 "name": "BaseBdev3", 00:16:47.287 "uuid": "d8d96e5d-8548-41dd-b63d-a9d25a029f69", 00:16:47.287 "is_configured": true, 00:16:47.287 "data_offset": 2048, 00:16:47.287 "data_size": 63488 00:16:47.287 } 00:16:47.287 ] 00:16:47.287 }' 00:16:47.287 04:57:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.287 04:57:10 -- common/autotest_common.sh@10 -- # set +x 00:16:47.546 04:57:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:47.546 04:57:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.546 04:57:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.546 04:57:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:47.805 04:57:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:47.805 04:57:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.805 04:57:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:48.063 [2024-11-18 04:57:11.484720] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.063 04:57:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.063 04:57:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.063 04:57:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.321 04:57:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:48.321 04:57:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:48.321 04:57:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.321 04:57:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:48.581 [2024-11-18 04:57:12.021365] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.581 [2024-11-18 04:57:12.021400] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.581 [2024-11-18 04:57:12.021461] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.581 [2024-11-18 04:57:12.091566] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.581 [2024-11-18 04:57:12.091605] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:48.858 04:57:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.858 04:57:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.858 04:57:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.858 04:57:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.859 04:57:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:48.859 04:57:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:48.859 04:57:12 -- bdev/bdev_raid.sh@287 -- # killprocess 73695 00:16:48.859 04:57:12 -- common/autotest_common.sh@936 -- # '[' -z 73695 ']' 00:16:48.859 04:57:12 -- common/autotest_common.sh@940 -- # kill -0 73695 00:16:48.859 04:57:12 -- common/autotest_common.sh@941 -- # uname 00:16:48.859 04:57:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.859 04:57:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73695 00:16:49.125 killing process with pid 73695 00:16:49.125 04:57:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:49.125 04:57:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:49.125 04:57:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73695' 00:16:49.125 04:57:12 -- common/autotest_common.sh@955 -- # kill 73695 00:16:49.125 04:57:12 -- common/autotest_common.sh@960 -- # wait 73695 00:16:49.125 [2024-11-18 04:57:12.385346] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.125 [2024-11-18 04:57:12.385456] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:50.062 00:16:50.062 real 0m11.283s 00:16:50.062 user 0m18.813s 00:16:50.062 sys 0m1.647s 00:16:50.062 04:57:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:50.062 04:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:50.062 ************************************ 00:16:50.062 END TEST raid_state_function_test_sb 00:16:50.062 ************************************ 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:50.062 04:57:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:50.062 04:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.062 04:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:50.062 ************************************ 00:16:50.062 START TEST raid_superblock_test 00:16:50.062 ************************************ 00:16:50.062 04:57:13 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@343 -- # 
local raid_bdev_name=raid_bdev1 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=74049 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:50.062 04:57:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 74049 /var/tmp/spdk-raid.sock 00:16:50.062 04:57:13 -- common/autotest_common.sh@829 -- # '[' -z 74049 ']' 00:16:50.062 04:57:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.062 04:57:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:50.062 04:57:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.062 04:57:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.062 04:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:50.062 [2024-11-18 04:57:13.561117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:50.062 [2024-11-18 04:57:13.561306] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74049 ] 00:16:50.321 [2024-11-18 04:57:13.733735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.580 [2024-11-18 04:57:13.910422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.580 [2024-11-18 04:57:14.084429] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.147 04:57:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.147 04:57:14 -- common/autotest_common.sh@862 -- # return 0 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.147 04:57:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:51.405 malloc1 00:16:51.405 04:57:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.663 [2024-11-18 04:57:14.949788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.663 [2024-11-18 04:57:14.949880] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:51.663 [2024-11-18 04:57:14.949920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:51.663 [2024-11-18 04:57:14.949935] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.663 [2024-11-18 04:57:14.952973] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.663 [2024-11-18 04:57:14.953030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.663 pt1 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.663 04:57:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:51.922 malloc2 00:16:51.922 04:57:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:52.180 [2024-11-18 04:57:15.453880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.180 [2024-11-18 04:57:15.454027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.181 [2024-11-18 04:57:15.454062] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:52.181 [2024-11-18 04:57:15.454077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.181 [2024-11-18 04:57:15.456712] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.181 [2024-11-18 04:57:15.456755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.181 pt2 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:52.181 04:57:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:52.181 malloc3 00:16:52.439 04:57:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.439 [2024-11-18 04:57:15.882003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.439 [2024-11-18 04:57:15.882082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:52.439 [2024-11-18 04:57:15.882131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:52.439 [2024-11-18 04:57:15.882145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.439 [2024-11-18 04:57:15.884750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.439 [2024-11-18 04:57:15.884803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.439 pt3 00:16:52.439 04:57:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:52.439 04:57:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:52.439 04:57:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:52.697 [2024-11-18 04:57:16.106044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:52.697 [2024-11-18 04:57:16.108090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.697 [2024-11-18 04:57:16.108189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.697 [2024-11-18 04:57:16.108433] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:16:52.697 [2024-11-18 04:57:16.108453] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:52.697 [2024-11-18 04:57:16.108573] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:52.697 [2024-11-18 04:57:16.108943] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:16:52.697 [2024-11-18 04:57:16.108976] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:16:52.697 [2024-11-18 04:57:16.109134] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.697 04:57:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.956 04:57:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.956 "name": "raid_bdev1", 00:16:52.956 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:16:52.956 "strip_size_kb": 0, 00:16:52.956 "state": "online", 00:16:52.956 "raid_level": "raid1", 00:16:52.956 "superblock": true, 00:16:52.956 "num_base_bdevs": 3, 00:16:52.956 "num_base_bdevs_discovered": 3, 00:16:52.956 "num_base_bdevs_operational": 3, 00:16:52.956 "base_bdevs_list": [ 00:16:52.956 { 00:16:52.956 "name": "pt1", 00:16:52.956 "uuid": 
"cb2b0655-ce6a-5c65-9028-04ec283ce246", 00:16:52.956 "is_configured": true, 00:16:52.956 "data_offset": 2048, 00:16:52.956 "data_size": 63488 00:16:52.956 }, 00:16:52.956 { 00:16:52.956 "name": "pt2", 00:16:52.956 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:16:52.956 "is_configured": true, 00:16:52.956 "data_offset": 2048, 00:16:52.956 "data_size": 63488 00:16:52.956 }, 00:16:52.956 { 00:16:52.956 "name": "pt3", 00:16:52.956 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:16:52.956 "is_configured": true, 00:16:52.956 "data_offset": 2048, 00:16:52.956 "data_size": 63488 00:16:52.956 } 00:16:52.956 ] 00:16:52.956 }' 00:16:52.956 04:57:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.956 04:57:16 -- common/autotest_common.sh@10 -- # set +x 00:16:53.214 04:57:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:53.214 04:57:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:53.472 [2024-11-18 04:57:16.886479] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.472 04:57:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fd3f163b-ecbd-428e-a423-6c7f0d83dc11 00:16:53.472 04:57:16 -- bdev/bdev_raid.sh@380 -- # '[' -z fd3f163b-ecbd-428e-a423-6c7f0d83dc11 ']' 00:16:53.472 04:57:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:53.731 [2024-11-18 04:57:17.090304] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.731 [2024-11-18 04:57:17.090528] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.731 [2024-11-18 04:57:17.090642] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.731 [2024-11-18 04:57:17.090748] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.731 [2024-11-18 04:57:17.090767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:16:53.731 04:57:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.731 04:57:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:53.990 04:57:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:53.990 04:57:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:53.990 04:57:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:53.990 04:57:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:54.249 04:57:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.249 04:57:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:54.508 04:57:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.508 04:57:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:54.508 04:57:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:54.508 04:57:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:54.768 04:57:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:54.768 04:57:18 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:54.768 04:57:18 -- common/autotest_common.sh@650 -- # local es=0 00:16:54.768 04:57:18 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:54.768 04:57:18 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.768 04:57:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.768 04:57:18 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.768 04:57:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.768 04:57:18 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.768 04:57:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.768 04:57:18 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.768 04:57:18 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:54.768 04:57:18 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:55.027 [2024-11-18 04:57:18.422579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:55.027 [2024-11-18 04:57:18.424817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:55.027 [2024-11-18 04:57:18.424878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:55.027 [2024-11-18 04:57:18.424943] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:55.027 [2024-11-18 04:57:18.425038] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:55.027 [2024-11-18 04:57:18.425073] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:55.027 [2024-11-18 04:57:18.425094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.027 [2024-11-18 04:57:18.425109] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:16:55.027 request: 00:16:55.027 { 00:16:55.027 "name": "raid_bdev1", 00:16:55.027 "raid_level": "raid1", 00:16:55.027 "base_bdevs": [ 00:16:55.027 "malloc1", 00:16:55.027 "malloc2", 00:16:55.027 "malloc3" 00:16:55.027 ], 00:16:55.027 "superblock": false, 00:16:55.027 "method": "bdev_raid_create", 00:16:55.027 "req_id": 1 00:16:55.027 } 00:16:55.027 Got JSON-RPC error response 00:16:55.027 response: 00:16:55.027 { 00:16:55.027 "code": -17, 00:16:55.027 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:55.027 } 00:16:55.027 04:57:18 -- common/autotest_common.sh@653 -- # es=1 00:16:55.027 04:57:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.027 04:57:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.027 04:57:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.027 04:57:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.027 
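
[editor's note] The refused create above is the point of this sub-test: the malloc bdevs still carry raid_bdev1's superblock, so a second bdev_raid_create is rejected with -17 "File exists". A minimal sketch for reproducing it by hand, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and using the same rpc.py path and arguments shown in the request JSON above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Same call and arguments as the logged request; expected to fail with
  # -17 "File exists" while the old superblock is still on the base bdevs.
  if ! $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo 'create refused: existing raid superblock found on the malloc bdevs'
  fi
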
04:57:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:55.286 04:57:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:55.286 04:57:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:55.286 04:57:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.546 [2024-11-18 04:57:18.850654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.546 [2024-11-18 04:57:18.850747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.546 [2024-11-18 04:57:18.850777] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:16:55.546 [2024-11-18 04:57:18.850792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.546 [2024-11-18 04:57:18.853302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.546 [2024-11-18 04:57:18.853349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.546 [2024-11-18 04:57:18.853450] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:55.546 [2024-11-18 04:57:18.853513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.546 pt1 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.546 04:57:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.805 04:57:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.805 "name": "raid_bdev1", 00:16:55.805 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:16:55.805 "strip_size_kb": 0, 00:16:55.805 "state": "configuring", 00:16:55.805 "raid_level": "raid1", 00:16:55.805 "superblock": true, 00:16:55.805 "num_base_bdevs": 3, 00:16:55.805 "num_base_bdevs_discovered": 1, 00:16:55.805 "num_base_bdevs_operational": 3, 00:16:55.805 "base_bdevs_list": [ 00:16:55.805 { 00:16:55.805 "name": "pt1", 00:16:55.805 "uuid": "cb2b0655-ce6a-5c65-9028-04ec283ce246", 00:16:55.805 "is_configured": true, 00:16:55.805 "data_offset": 2048, 00:16:55.805 "data_size": 63488 00:16:55.805 }, 00:16:55.805 { 00:16:55.805 "name": null, 00:16:55.805 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:16:55.805 "is_configured": false, 00:16:55.805 "data_offset": 2048, 00:16:55.805 "data_size": 63488 00:16:55.805 }, 00:16:55.805 { 00:16:55.805 "name": null, 00:16:55.805 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:16:55.805 "is_configured": false, 00:16:55.805 "data_offset": 2048, 00:16:55.805 "data_size": 63488 00:16:55.805 } 
00:16:55.805 ] 00:16:55.805 }' 00:16:55.805 04:57:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.805 04:57:19 -- common/autotest_common.sh@10 -- # set +x 00:16:56.065 04:57:19 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:56.065 04:57:19 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.065 [2024-11-18 04:57:19.558876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.065 [2024-11-18 04:57:19.558998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.065 [2024-11-18 04:57:19.559032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:16:56.065 [2024-11-18 04:57:19.559050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.065 [2024-11-18 04:57:19.559621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.065 [2024-11-18 04:57:19.559709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.065 [2024-11-18 04:57:19.559822] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:56.065 [2024-11-18 04:57:19.559865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.065 pt2 00:16:56.065 04:57:19 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:56.324 [2024-11-18 04:57:19.806924] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.324 04:57:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.581 04:57:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.581 "name": "raid_bdev1", 00:16:56.581 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:16:56.581 "strip_size_kb": 0, 00:16:56.581 "state": "configuring", 00:16:56.581 "raid_level": "raid1", 00:16:56.581 "superblock": true, 00:16:56.581 "num_base_bdevs": 3, 00:16:56.581 "num_base_bdevs_discovered": 1, 00:16:56.581 "num_base_bdevs_operational": 3, 00:16:56.581 "base_bdevs_list": [ 00:16:56.581 { 00:16:56.581 "name": "pt1", 00:16:56.581 "uuid": "cb2b0655-ce6a-5c65-9028-04ec283ce246", 00:16:56.581 "is_configured": true, 00:16:56.581 "data_offset": 2048, 00:16:56.581 "data_size": 63488 00:16:56.581 }, 00:16:56.581 { 00:16:56.581 "name": null, 00:16:56.581 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:16:56.581 "is_configured": false, 
00:16:56.581 "data_offset": 2048, 00:16:56.581 "data_size": 63488 00:16:56.581 }, 00:16:56.581 { 00:16:56.581 "name": null, 00:16:56.581 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:16:56.581 "is_configured": false, 00:16:56.581 "data_offset": 2048, 00:16:56.581 "data_size": 63488 00:16:56.581 } 00:16:56.581 ] 00:16:56.581 }' 00:16:56.581 04:57:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.581 04:57:20 -- common/autotest_common.sh@10 -- # set +x 00:16:56.840 04:57:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:56.840 04:57:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:56.840 04:57:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.099 [2024-11-18 04:57:20.619151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.099 [2024-11-18 04:57:20.619322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.099 [2024-11-18 04:57:20.619373] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:16:57.099 [2024-11-18 04:57:20.619388] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.099 [2024-11-18 04:57:20.619925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.099 [2024-11-18 04:57:20.619957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.099 [2024-11-18 04:57:20.620060] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:57.099 [2024-11-18 04:57:20.620088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.358 pt2 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.358 [2024-11-18 04:57:20.835254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.358 [2024-11-18 04:57:20.835388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.358 [2024-11-18 04:57:20.835430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:16:57.358 [2024-11-18 04:57:20.835446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.358 [2024-11-18 04:57:20.836000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.358 [2024-11-18 04:57:20.836032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.358 [2024-11-18 04:57:20.836138] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:57.358 [2024-11-18 04:57:20.836168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.358 [2024-11-18 04:57:20.836385] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:16:57.358 [2024-11-18 04:57:20.836403] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.358 [2024-11-18 04:57:20.836516] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:57.358 [2024-11-18 04:57:20.836885] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:16:57.358 [2024-11-18 04:57:20.836937] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:16:57.358 [2024-11-18 04:57:20.837082] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.358 pt3 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.358 04:57:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.617 04:57:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.617 "name": "raid_bdev1", 00:16:57.617 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:16:57.617 "strip_size_kb": 0, 00:16:57.617 "state": "online", 00:16:57.617 "raid_level": "raid1", 00:16:57.617 "superblock": true, 00:16:57.617 "num_base_bdevs": 3, 00:16:57.617 "num_base_bdevs_discovered": 3, 00:16:57.617 "num_base_bdevs_operational": 3, 00:16:57.617 "base_bdevs_list": [ 00:16:57.617 { 00:16:57.617 "name": "pt1", 00:16:57.617 "uuid": "cb2b0655-ce6a-5c65-9028-04ec283ce246", 00:16:57.617 "is_configured": true, 00:16:57.617 "data_offset": 2048, 00:16:57.617 "data_size": 63488 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "name": "pt2", 00:16:57.617 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:16:57.617 "is_configured": true, 00:16:57.617 "data_offset": 2048, 00:16:57.617 "data_size": 63488 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "name": "pt3", 00:16:57.617 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:16:57.617 "is_configured": true, 00:16:57.617 "data_offset": 2048, 00:16:57.617 "data_size": 63488 00:16:57.617 } 00:16:57.617 ] 00:16:57.617 }' 00:16:57.617 04:57:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.617 04:57:21 -- common/autotest_common.sh@10 -- # set +x 00:16:58.185 04:57:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:58.185 04:57:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:58.185 [2024-11-18 04:57:21.607714] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.185 04:57:21 -- bdev/bdev_raid.sh@430 -- # '[' fd3f163b-ecbd-428e-a423-6c7f0d83dc11 '!=' fd3f163b-ecbd-428e-a423-6c7f0d83dc11 ']' 00:16:58.185 04:57:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:58.185 04:57:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:58.185 04:57:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:58.185 04:57:21 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:58.444 [2024-11-18 04:57:21.815560] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.444 04:57:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.703 04:57:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.703 "name": "raid_bdev1", 00:16:58.703 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:16:58.703 "strip_size_kb": 0, 00:16:58.703 "state": "online", 00:16:58.703 "raid_level": "raid1", 00:16:58.703 "superblock": true, 00:16:58.703 "num_base_bdevs": 3, 00:16:58.703 "num_base_bdevs_discovered": 2, 00:16:58.703 "num_base_bdevs_operational": 2, 00:16:58.703 "base_bdevs_list": [ 00:16:58.703 { 00:16:58.703 "name": null, 00:16:58.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.703 "is_configured": false, 00:16:58.703 "data_offset": 2048, 00:16:58.703 "data_size": 63488 00:16:58.703 }, 00:16:58.703 { 00:16:58.703 "name": "pt2", 00:16:58.703 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:16:58.703 "is_configured": true, 00:16:58.703 "data_offset": 2048, 00:16:58.703 "data_size": 63488 00:16:58.703 }, 00:16:58.703 { 00:16:58.703 "name": "pt3", 00:16:58.703 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:16:58.703 "is_configured": true, 00:16:58.703 "data_offset": 2048, 00:16:58.703 "data_size": 63488 00:16:58.703 } 00:16:58.703 ] 00:16:58.703 }' 00:16:58.703 04:57:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.703 04:57:22 -- common/autotest_common.sh@10 -- # set +x 00:16:58.961 04:57:22 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:59.220 [2024-11-18 04:57:22.551783] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.220 [2024-11-18 04:57:22.552043] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.220 [2024-11-18 04:57:22.552149] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.220 [2024-11-18 04:57:22.552271] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.220 [2024-11-18 04:57:22.552294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:16:59.220 04:57:22 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.220 04:57:22 -- 
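
[editor's note] Each verify_raid_bdev_state call above reduces to one RPC plus a jq filter over the fields visible in the JSON dumps. A standalone sketch of the online/2-of-3 check just performed after removing pt1 (same socket, field names, and $RPC variable as in the previous sketch; the harness's own implementation differs):

  # Sketch of the harness's state check, not the harness itself.
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(jq -r '.state' <<< "$info")
  found=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  [ "$state" = online ] && [ "$found" -eq 2 ] || echo "unexpected state: $state/$found"
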
bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:59.479 04:57:22 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:59.479 04:57:22 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:59.479 04:57:22 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:59.479 04:57:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:59.479 04:57:22 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:59.738 04:57:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:59.738 04:57:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:59.738 04:57:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.997 [2024-11-18 04:57:23.467960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.997 [2024-11-18 04:57:23.468052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.997 [2024-11-18 04:57:23.468082] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:16:59.997 [2024-11-18 04:57:23.468099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.997 [2024-11-18 04:57:23.470425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.997 [2024-11-18 04:57:23.470607] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.997 [2024-11-18 04:57:23.470718] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:59.997 [2024-11-18 04:57:23.470783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.997 pt2 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.997 04:57:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.257 04:57:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.257 "name": "raid_bdev1", 00:17:00.257 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:17:00.257 "strip_size_kb": 0, 00:17:00.257 "state": "configuring", 00:17:00.257 "raid_level": "raid1", 00:17:00.257 
"superblock": true, 00:17:00.257 "num_base_bdevs": 3, 00:17:00.257 "num_base_bdevs_discovered": 1, 00:17:00.257 "num_base_bdevs_operational": 2, 00:17:00.257 "base_bdevs_list": [ 00:17:00.257 { 00:17:00.257 "name": null, 00:17:00.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.257 "is_configured": false, 00:17:00.257 "data_offset": 2048, 00:17:00.257 "data_size": 63488 00:17:00.257 }, 00:17:00.257 { 00:17:00.257 "name": "pt2", 00:17:00.257 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:17:00.257 "is_configured": true, 00:17:00.257 "data_offset": 2048, 00:17:00.257 "data_size": 63488 00:17:00.257 }, 00:17:00.257 { 00:17:00.257 "name": null, 00:17:00.257 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:17:00.257 "is_configured": false, 00:17:00.257 "data_offset": 2048, 00:17:00.257 "data_size": 63488 00:17:00.257 } 00:17:00.257 ] 00:17:00.257 }' 00:17:00.257 04:57:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.257 04:57:23 -- common/autotest_common.sh@10 -- # set +x 00:17:00.516 04:57:24 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:00.516 04:57:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:00.516 04:57:24 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:00.516 04:57:24 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:00.774 [2024-11-18 04:57:24.200126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:00.775 [2024-11-18 04:57:24.200250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.775 [2024-11-18 04:57:24.200283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:17:00.775 [2024-11-18 04:57:24.200300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.775 [2024-11-18 04:57:24.200814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.775 [2024-11-18 04:57:24.200850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:00.775 [2024-11-18 04:57:24.200945] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:00.775 [2024-11-18 04:57:24.200993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:00.775 [2024-11-18 04:57:24.201113] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:17:00.775 [2024-11-18 04:57:24.201133] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:00.775 [2024-11-18 04:57:24.201223] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:00.775 [2024-11-18 04:57:24.201583] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:17:00.775 [2024-11-18 04:57:24.201600] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:17:00.775 [2024-11-18 04:57:24.201742] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.775 pt3 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:00.775 04:57:24 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.775 04:57:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.034 04:57:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.034 "name": "raid_bdev1", 00:17:01.034 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:17:01.034 "strip_size_kb": 0, 00:17:01.034 "state": "online", 00:17:01.034 "raid_level": "raid1", 00:17:01.034 "superblock": true, 00:17:01.034 "num_base_bdevs": 3, 00:17:01.034 "num_base_bdevs_discovered": 2, 00:17:01.034 "num_base_bdevs_operational": 2, 00:17:01.034 "base_bdevs_list": [ 00:17:01.034 { 00:17:01.034 "name": null, 00:17:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.034 "is_configured": false, 00:17:01.034 "data_offset": 2048, 00:17:01.034 "data_size": 63488 00:17:01.034 }, 00:17:01.034 { 00:17:01.034 "name": "pt2", 00:17:01.034 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:17:01.034 "is_configured": true, 00:17:01.034 "data_offset": 2048, 00:17:01.034 "data_size": 63488 00:17:01.034 }, 00:17:01.034 { 00:17:01.034 "name": "pt3", 00:17:01.034 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:17:01.034 "is_configured": true, 00:17:01.034 "data_offset": 2048, 00:17:01.034 "data_size": 63488 00:17:01.034 } 00:17:01.034 ] 00:17:01.034 }' 00:17:01.034 04:57:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.034 04:57:24 -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 04:57:24 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:01.293 04:57:24 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:01.551 [2024-11-18 04:57:25.004338] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.551 [2024-11-18 04:57:25.004595] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.551 [2024-11-18 04:57:25.004691] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.551 [2024-11-18 04:57:25.004768] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.551 [2024-11-18 04:57:25.004784] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:17:01.551 04:57:25 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.551 04:57:25 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:01.810 04:57:25 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:01.810 04:57:25 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:01.810 04:57:25 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.069 [2024-11-18 04:57:25.468468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.069 [2024-11-18 04:57:25.468560] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.069 [2024-11-18 04:57:25.468626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:02.069 [2024-11-18 04:57:25.468641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.069 [2024-11-18 04:57:25.471745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.069 [2024-11-18 04:57:25.471791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.069 [2024-11-18 04:57:25.471917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:02.069 [2024-11-18 04:57:25.472012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.069 pt1 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.069 04:57:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.329 04:57:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.329 "name": "raid_bdev1", 00:17:02.329 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:17:02.329 "strip_size_kb": 0, 00:17:02.329 "state": "configuring", 00:17:02.329 "raid_level": "raid1", 00:17:02.329 "superblock": true, 00:17:02.329 "num_base_bdevs": 3, 00:17:02.329 "num_base_bdevs_discovered": 1, 00:17:02.329 "num_base_bdevs_operational": 3, 00:17:02.329 "base_bdevs_list": [ 00:17:02.329 { 00:17:02.329 "name": "pt1", 00:17:02.329 "uuid": "cb2b0655-ce6a-5c65-9028-04ec283ce246", 00:17:02.329 "is_configured": true, 00:17:02.329 "data_offset": 2048, 00:17:02.329 "data_size": 63488 00:17:02.329 }, 00:17:02.329 { 00:17:02.329 "name": null, 00:17:02.329 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:17:02.329 "is_configured": false, 00:17:02.329 "data_offset": 2048, 00:17:02.329 "data_size": 63488 00:17:02.329 }, 00:17:02.329 { 00:17:02.329 "name": null, 00:17:02.329 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:17:02.329 "is_configured": false, 00:17:02.329 "data_offset": 2048, 00:17:02.329 "data_size": 63488 00:17:02.329 } 00:17:02.329 ] 00:17:02.329 }' 00:17:02.329 04:57:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.329 04:57:25 -- common/autotest_common.sh@10 -- # set +x 00:17:02.588 04:57:26 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:02.588 04:57:26 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:02.588 04:57:26 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:02.846 04:57:26 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:02.846 04:57:26 -- 
bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:02.846 04:57:26 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:03.105 04:57:26 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:03.105 04:57:26 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:03.105 04:57:26 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:03.105 04:57:26 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.364 [2024-11-18 04:57:26.628795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.364 [2024-11-18 04:57:26.628890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.364 [2024-11-18 04:57:26.628927] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:17:03.364 [2024-11-18 04:57:26.628943] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.364 [2024-11-18 04:57:26.629538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.364 [2024-11-18 04:57:26.629582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.364 [2024-11-18 04:57:26.629694] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:03.364 [2024-11-18 04:57:26.629713] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:03.364 [2024-11-18 04:57:26.629730] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.364 [2024-11-18 04:57:26.629756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:17:03.364 [2024-11-18 04:57:26.629831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.364 pt3 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.364 04:57:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.623 04:57:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.623 "name": "raid_bdev1", 00:17:03.623 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:17:03.623 "strip_size_kb": 0, 00:17:03.623 "state": "configuring", 00:17:03.623 "raid_level": "raid1", 00:17:03.623 "superblock": true, 00:17:03.623 "num_base_bdevs": 3, 00:17:03.623 "num_base_bdevs_discovered": 1, 00:17:03.623 "num_base_bdevs_operational": 2, 00:17:03.623 "base_bdevs_list": [ 
00:17:03.623 { 00:17:03.623 "name": null, 00:17:03.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.623 "is_configured": false, 00:17:03.623 "data_offset": 2048, 00:17:03.623 "data_size": 63488 00:17:03.623 }, 00:17:03.623 { 00:17:03.623 "name": null, 00:17:03.623 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:17:03.623 "is_configured": false, 00:17:03.623 "data_offset": 2048, 00:17:03.623 "data_size": 63488 00:17:03.623 }, 00:17:03.623 { 00:17:03.623 "name": "pt3", 00:17:03.623 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:17:03.623 "is_configured": true, 00:17:03.623 "data_offset": 2048, 00:17:03.623 "data_size": 63488 00:17:03.623 } 00:17:03.623 ] 00:17:03.623 }' 00:17:03.623 04:57:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.623 04:57:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.882 04:57:27 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:03.882 04:57:27 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:03.882 04:57:27 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.141 [2024-11-18 04:57:27.416969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.141 [2024-11-18 04:57:27.417067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.141 [2024-11-18 04:57:27.417098] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:17:04.141 [2024-11-18 04:57:27.417113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.141 [2024-11-18 04:57:27.417695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.141 [2024-11-18 04:57:27.417728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.141 [2024-11-18 04:57:27.417839] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:04.141 [2024-11-18 04:57:27.417871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.141 [2024-11-18 04:57:27.417995] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:17:04.141 [2024-11-18 04:57:27.418015] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.141 [2024-11-18 04:57:27.418138] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:04.141 [2024-11-18 04:57:27.418499] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:17:04.141 [2024-11-18 04:57:27.418516] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:17:04.141 [2024-11-18 04:57:27.418691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.141 pt2 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.141 04:57:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.400 04:57:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.400 "name": "raid_bdev1", 00:17:04.400 "uuid": "fd3f163b-ecbd-428e-a423-6c7f0d83dc11", 00:17:04.400 "strip_size_kb": 0, 00:17:04.400 "state": "online", 00:17:04.400 "raid_level": "raid1", 00:17:04.400 "superblock": true, 00:17:04.400 "num_base_bdevs": 3, 00:17:04.400 "num_base_bdevs_discovered": 2, 00:17:04.400 "num_base_bdevs_operational": 2, 00:17:04.400 "base_bdevs_list": [ 00:17:04.400 { 00:17:04.400 "name": null, 00:17:04.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.400 "is_configured": false, 00:17:04.400 "data_offset": 2048, 00:17:04.400 "data_size": 63488 00:17:04.400 }, 00:17:04.400 { 00:17:04.400 "name": "pt2", 00:17:04.401 "uuid": "b9bb5301-c33c-5b6f-860c-00555d1976d6", 00:17:04.401 "is_configured": true, 00:17:04.401 "data_offset": 2048, 00:17:04.401 "data_size": 63488 00:17:04.401 }, 00:17:04.401 { 00:17:04.401 "name": "pt3", 00:17:04.401 "uuid": "61667893-22ed-59d6-9c8e-a15c0331ed84", 00:17:04.401 "is_configured": true, 00:17:04.401 "data_offset": 2048, 00:17:04.401 "data_size": 63488 00:17:04.401 } 00:17:04.401 ] 00:17:04.401 }' 00:17:04.401 04:57:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.401 04:57:27 -- common/autotest_common.sh@10 -- # set +x 00:17:04.659 04:57:27 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:04.659 04:57:27 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:04.918 [2024-11-18 04:57:28.221421] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.918 04:57:28 -- bdev/bdev_raid.sh@506 -- # '[' fd3f163b-ecbd-428e-a423-6c7f0d83dc11 '!=' fd3f163b-ecbd-428e-a423-6c7f0d83dc11 ']' 00:17:04.918 04:57:28 -- bdev/bdev_raid.sh@511 -- # killprocess 74049 00:17:04.918 04:57:28 -- common/autotest_common.sh@936 -- # '[' -z 74049 ']' 00:17:04.918 04:57:28 -- common/autotest_common.sh@940 -- # kill -0 74049 00:17:04.918 04:57:28 -- common/autotest_common.sh@941 -- # uname 00:17:04.918 04:57:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:04.918 04:57:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74049 00:17:04.918 killing process with pid 74049 00:17:04.918 04:57:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:04.918 04:57:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:04.918 04:57:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74049' 00:17:04.918 04:57:28 -- common/autotest_common.sh@955 -- # kill 74049 00:17:04.918 [2024-11-18 04:57:28.275929] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.918 04:57:28 -- common/autotest_common.sh@960 -- # wait 74049 00:17:04.918 [2024-11-18 04:57:28.276008] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.918 [2024-11-18 04:57:28.276094] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.918 [2024-11-18 04:57:28.276108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:17:05.212 [2024-11-18 04:57:28.501800] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.183 ************************************ 00:17:06.183 END TEST raid_superblock_test 00:17:06.183 ************************************ 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:06.183 00:17:06.183 real 0m16.071s 00:17:06.183 user 0m27.776s 00:17:06.183 sys 0m2.338s 00:17:06.183 04:57:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.183 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:06.183 04:57:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:06.183 04:57:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.183 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:17:06.183 ************************************ 00:17:06.183 START TEST raid_state_function_test 00:17:06.183 ************************************ 00:17:06.183 04:57:29 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:06.183 04:57:29 -- 
bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:06.183 04:57:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:06.183 Process raid pid: 74598 00:17:06.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:06.184 04:57:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=74598 00:17:06.184 04:57:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74598' 00:17:06.184 04:57:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74598 /var/tmp/spdk-raid.sock 00:17:06.184 04:57:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:06.184 04:57:29 -- common/autotest_common.sh@829 -- # '[' -z 74598 ']' 00:17:06.184 04:57:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.184 04:57:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.184 04:57:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.184 04:57:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.184 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:17:06.184 [2024-11-18 04:57:29.682556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:06.184 [2024-11-18 04:57:29.682877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.442 [2024-11-18 04:57:29.840244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.702 [2024-11-18 04:57:30.016110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.702 [2024-11-18 04:57:30.186871] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.270 04:57:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.270 04:57:30 -- common/autotest_common.sh@862 -- # return 0 00:17:07.270 04:57:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:07.530 [2024-11-18 04:57:30.819781] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.530 [2024-11-18 04:57:30.820052] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.530 [2024-11-18 04:57:30.820080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.530 [2024-11-18 04:57:30.820098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.530 [2024-11-18 04:57:30.820108] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.530 [2024-11-18 04:57:30.820120] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.530 [2024-11-18 04:57:30.820128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:07.530 [2024-11-18 04:57:30.820140] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.530 
04:57:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.530 04:57:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.530 04:57:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.530 "name": "Existed_Raid", 00:17:07.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.530 "strip_size_kb": 64, 00:17:07.530 "state": "configuring", 00:17:07.530 "raid_level": "raid0", 00:17:07.530 "superblock": false, 00:17:07.530 "num_base_bdevs": 4, 00:17:07.530 "num_base_bdevs_discovered": 0, 00:17:07.530 "num_base_bdevs_operational": 4, 00:17:07.530 "base_bdevs_list": [ 00:17:07.530 { 00:17:07.530 "name": "BaseBdev1", 00:17:07.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.530 "is_configured": false, 00:17:07.530 "data_offset": 0, 00:17:07.530 "data_size": 0 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "name": "BaseBdev2", 00:17:07.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.530 "is_configured": false, 00:17:07.530 "data_offset": 0, 00:17:07.530 "data_size": 0 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "name": "BaseBdev3", 00:17:07.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.530 "is_configured": false, 00:17:07.530 "data_offset": 0, 00:17:07.530 "data_size": 0 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "name": "BaseBdev4", 00:17:07.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.530 "is_configured": false, 00:17:07.530 "data_offset": 0, 00:17:07.530 "data_size": 0 00:17:07.530 } 00:17:07.530 ] 00:17:07.530 }' 00:17:07.530 04:57:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.530 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:17:08.099 04:57:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.100 [2024-11-18 04:57:31.571881] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.100 [2024-11-18 04:57:31.572108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:08.100 04:57:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:08.360 [2024-11-18 04:57:31.771965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.360 [2024-11-18 04:57:31.772221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.360 [2024-11-18 04:57:31.772248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.360 [2024-11-18 04:57:31.772265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.360 [2024-11-18 
04:57:31.772275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.360 [2024-11-18 04:57:31.772287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.360 [2024-11-18 04:57:31.772295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:08.360 [2024-11-18 04:57:31.772307] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:08.360 04:57:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.619 BaseBdev1 00:17:08.619 [2024-11-18 04:57:32.000270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.619 04:57:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:08.619 04:57:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:08.619 04:57:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:08.619 04:57:32 -- common/autotest_common.sh@899 -- # local i 00:17:08.619 04:57:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:08.619 04:57:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:08.619 04:57:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.878 04:57:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.138 [ 00:17:09.138 { 00:17:09.138 "name": "BaseBdev1", 00:17:09.138 "aliases": [ 00:17:09.138 "edef67b4-0fde-4b57-bfde-0daae2107d7f" 00:17:09.138 ], 00:17:09.138 "product_name": "Malloc disk", 00:17:09.138 "block_size": 512, 00:17:09.138 "num_blocks": 65536, 00:17:09.138 "uuid": "edef67b4-0fde-4b57-bfde-0daae2107d7f", 00:17:09.138 "assigned_rate_limits": { 00:17:09.138 "rw_ios_per_sec": 0, 00:17:09.138 "rw_mbytes_per_sec": 0, 00:17:09.138 "r_mbytes_per_sec": 0, 00:17:09.138 "w_mbytes_per_sec": 0 00:17:09.138 }, 00:17:09.138 "claimed": true, 00:17:09.138 "claim_type": "exclusive_write", 00:17:09.138 "zoned": false, 00:17:09.138 "supported_io_types": { 00:17:09.138 "read": true, 00:17:09.138 "write": true, 00:17:09.138 "unmap": true, 00:17:09.138 "write_zeroes": true, 00:17:09.138 "flush": true, 00:17:09.138 "reset": true, 00:17:09.138 "compare": false, 00:17:09.138 "compare_and_write": false, 00:17:09.138 "abort": true, 00:17:09.138 "nvme_admin": false, 00:17:09.138 "nvme_io": false 00:17:09.138 }, 00:17:09.138 "memory_domains": [ 00:17:09.138 { 00:17:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.138 "dma_device_type": 2 00:17:09.138 } 00:17:09.138 ], 00:17:09.138 "driver_specific": {} 00:17:09.138 } 00:17:09.138 ] 00:17:09.138 04:57:32 -- common/autotest_common.sh@905 -- # return 0 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.138 04:57:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.397 04:57:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.397 "name": "Existed_Raid", 00:17:09.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.397 "strip_size_kb": 64, 00:17:09.397 "state": "configuring", 00:17:09.397 "raid_level": "raid0", 00:17:09.397 "superblock": false, 00:17:09.397 "num_base_bdevs": 4, 00:17:09.397 "num_base_bdevs_discovered": 1, 00:17:09.397 "num_base_bdevs_operational": 4, 00:17:09.397 "base_bdevs_list": [ 00:17:09.397 { 00:17:09.397 "name": "BaseBdev1", 00:17:09.397 "uuid": "edef67b4-0fde-4b57-bfde-0daae2107d7f", 00:17:09.397 "is_configured": true, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 65536 00:17:09.397 }, 00:17:09.397 { 00:17:09.397 "name": "BaseBdev2", 00:17:09.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.397 "is_configured": false, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 0 00:17:09.397 }, 00:17:09.397 { 00:17:09.397 "name": "BaseBdev3", 00:17:09.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.397 "is_configured": false, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 0 00:17:09.397 }, 00:17:09.397 { 00:17:09.397 "name": "BaseBdev4", 00:17:09.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.397 "is_configured": false, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 0 00:17:09.397 } 00:17:09.397 ] 00:17:09.397 }' 00:17:09.397 04:57:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.397 04:57:32 -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 04:57:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:09.915 [2024-11-18 04:57:33.228670] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.915 [2024-11-18 04:57:33.228726] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:09.915 04:57:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:09.915 04:57:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:09.915 [2024-11-18 04:57:33.428750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.915 [2024-11-18 04:57:33.430701] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.915 [2024-11-18 04:57:33.430769] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.915 [2024-11-18 04:57:33.430783] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.915 [2024-11-18 04:57:33.430798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.915 [2024-11-18 04:57:33.430806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:09.915 [2024-11-18 04:57:33.430820] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.174 04:57:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.174 "name": "Existed_Raid", 00:17:10.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.174 "strip_size_kb": 64, 00:17:10.174 "state": "configuring", 00:17:10.174 "raid_level": "raid0", 00:17:10.174 "superblock": false, 00:17:10.174 "num_base_bdevs": 4, 00:17:10.174 "num_base_bdevs_discovered": 1, 00:17:10.174 "num_base_bdevs_operational": 4, 00:17:10.174 "base_bdevs_list": [ 00:17:10.174 { 00:17:10.174 "name": "BaseBdev1", 00:17:10.174 "uuid": "edef67b4-0fde-4b57-bfde-0daae2107d7f", 00:17:10.174 "is_configured": true, 00:17:10.174 "data_offset": 0, 00:17:10.174 "data_size": 65536 00:17:10.175 }, 00:17:10.175 { 00:17:10.175 "name": "BaseBdev2", 00:17:10.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.175 "is_configured": false, 00:17:10.175 "data_offset": 0, 00:17:10.175 "data_size": 0 00:17:10.175 }, 00:17:10.175 { 00:17:10.175 "name": "BaseBdev3", 00:17:10.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.175 "is_configured": false, 00:17:10.175 "data_offset": 0, 00:17:10.175 "data_size": 0 00:17:10.175 }, 00:17:10.175 { 00:17:10.175 "name": "BaseBdev4", 00:17:10.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.175 "is_configured": false, 00:17:10.175 "data_offset": 0, 00:17:10.175 "data_size": 0 00:17:10.175 } 00:17:10.175 ] 00:17:10.175 }' 00:17:10.175 04:57:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.175 04:57:33 -- common/autotest_common.sh@10 -- # set +x 00:17:10.743 04:57:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.743 [2024-11-18 04:57:34.234583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.743 BaseBdev2 00:17:10.743 04:57:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:10.743 04:57:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:10.743 04:57:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.743 04:57:34 -- common/autotest_common.sh@899 -- # local i 00:17:10.743 04:57:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.743 04:57:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.743 04:57:34 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.002 04:57:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:11.261 [ 00:17:11.262 { 00:17:11.262 "name": "BaseBdev2", 00:17:11.262 "aliases": [ 00:17:11.262 "7bec359a-aed6-4811-85d7-bcc5fe2aa46a" 00:17:11.262 ], 00:17:11.262 "product_name": "Malloc disk", 00:17:11.262 "block_size": 512, 00:17:11.262 "num_blocks": 65536, 00:17:11.262 "uuid": "7bec359a-aed6-4811-85d7-bcc5fe2aa46a", 00:17:11.262 "assigned_rate_limits": { 00:17:11.262 "rw_ios_per_sec": 0, 00:17:11.262 "rw_mbytes_per_sec": 0, 00:17:11.262 "r_mbytes_per_sec": 0, 00:17:11.262 "w_mbytes_per_sec": 0 00:17:11.262 }, 00:17:11.262 "claimed": true, 00:17:11.262 "claim_type": "exclusive_write", 00:17:11.262 "zoned": false, 00:17:11.262 "supported_io_types": { 00:17:11.262 "read": true, 00:17:11.262 "write": true, 00:17:11.262 "unmap": true, 00:17:11.262 "write_zeroes": true, 00:17:11.262 "flush": true, 00:17:11.262 "reset": true, 00:17:11.262 "compare": false, 00:17:11.262 "compare_and_write": false, 00:17:11.262 "abort": true, 00:17:11.262 "nvme_admin": false, 00:17:11.262 "nvme_io": false 00:17:11.262 }, 00:17:11.262 "memory_domains": [ 00:17:11.262 { 00:17:11.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.262 "dma_device_type": 2 00:17:11.262 } 00:17:11.262 ], 00:17:11.262 "driver_specific": {} 00:17:11.262 } 00:17:11.262 ] 00:17:11.262 04:57:34 -- common/autotest_common.sh@905 -- # return 0 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.262 04:57:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.521 04:57:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.521 "name": "Existed_Raid", 00:17:11.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.521 "strip_size_kb": 64, 00:17:11.521 "state": "configuring", 00:17:11.521 "raid_level": "raid0", 00:17:11.521 "superblock": false, 00:17:11.521 "num_base_bdevs": 4, 00:17:11.521 "num_base_bdevs_discovered": 2, 00:17:11.521 "num_base_bdevs_operational": 4, 00:17:11.521 "base_bdevs_list": [ 00:17:11.521 { 00:17:11.521 "name": "BaseBdev1", 00:17:11.521 "uuid": "edef67b4-0fde-4b57-bfde-0daae2107d7f", 00:17:11.521 "is_configured": true, 00:17:11.521 "data_offset": 0, 00:17:11.521 "data_size": 65536 00:17:11.521 }, 00:17:11.521 { 00:17:11.521 "name": "BaseBdev2", 00:17:11.521 "uuid": 
"7bec359a-aed6-4811-85d7-bcc5fe2aa46a", 00:17:11.521 "is_configured": true, 00:17:11.521 "data_offset": 0, 00:17:11.521 "data_size": 65536 00:17:11.521 }, 00:17:11.521 { 00:17:11.521 "name": "BaseBdev3", 00:17:11.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.521 "is_configured": false, 00:17:11.521 "data_offset": 0, 00:17:11.521 "data_size": 0 00:17:11.521 }, 00:17:11.521 { 00:17:11.521 "name": "BaseBdev4", 00:17:11.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.521 "is_configured": false, 00:17:11.521 "data_offset": 0, 00:17:11.521 "data_size": 0 00:17:11.521 } 00:17:11.521 ] 00:17:11.521 }' 00:17:11.521 04:57:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.521 04:57:34 -- common/autotest_common.sh@10 -- # set +x 00:17:11.781 04:57:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:12.041 [2024-11-18 04:57:35.414090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.041 BaseBdev3 00:17:12.041 04:57:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:12.041 04:57:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:12.041 04:57:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:12.041 04:57:35 -- common/autotest_common.sh@899 -- # local i 00:17:12.041 04:57:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:12.041 04:57:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:12.041 04:57:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.300 04:57:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:12.300 [ 00:17:12.300 { 00:17:12.300 "name": "BaseBdev3", 00:17:12.300 "aliases": [ 00:17:12.300 "2e5dd763-de13-4e28-ae03-a3ad23bf6595" 00:17:12.300 ], 00:17:12.300 "product_name": "Malloc disk", 00:17:12.300 "block_size": 512, 00:17:12.300 "num_blocks": 65536, 00:17:12.300 "uuid": "2e5dd763-de13-4e28-ae03-a3ad23bf6595", 00:17:12.300 "assigned_rate_limits": { 00:17:12.300 "rw_ios_per_sec": 0, 00:17:12.300 "rw_mbytes_per_sec": 0, 00:17:12.300 "r_mbytes_per_sec": 0, 00:17:12.300 "w_mbytes_per_sec": 0 00:17:12.300 }, 00:17:12.300 "claimed": true, 00:17:12.300 "claim_type": "exclusive_write", 00:17:12.300 "zoned": false, 00:17:12.300 "supported_io_types": { 00:17:12.300 "read": true, 00:17:12.300 "write": true, 00:17:12.300 "unmap": true, 00:17:12.300 "write_zeroes": true, 00:17:12.300 "flush": true, 00:17:12.300 "reset": true, 00:17:12.300 "compare": false, 00:17:12.300 "compare_and_write": false, 00:17:12.300 "abort": true, 00:17:12.300 "nvme_admin": false, 00:17:12.300 "nvme_io": false 00:17:12.300 }, 00:17:12.300 "memory_domains": [ 00:17:12.300 { 00:17:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.300 "dma_device_type": 2 00:17:12.300 } 00:17:12.300 ], 00:17:12.301 "driver_specific": {} 00:17:12.301 } 00:17:12.301 ] 00:17:12.560 04:57:35 -- common/autotest_common.sh@905 -- # return 0 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.560 04:57:35 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.560 04:57:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:12.561 04:57:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.561 04:57:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.561 04:57:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.561 04:57:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.561 04:57:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.561 04:57:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.561 04:57:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.561 "name": "Existed_Raid", 00:17:12.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.561 "strip_size_kb": 64, 00:17:12.561 "state": "configuring", 00:17:12.561 "raid_level": "raid0", 00:17:12.561 "superblock": false, 00:17:12.561 "num_base_bdevs": 4, 00:17:12.561 "num_base_bdevs_discovered": 3, 00:17:12.561 "num_base_bdevs_operational": 4, 00:17:12.561 "base_bdevs_list": [ 00:17:12.561 { 00:17:12.561 "name": "BaseBdev1", 00:17:12.561 "uuid": "edef67b4-0fde-4b57-bfde-0daae2107d7f", 00:17:12.561 "is_configured": true, 00:17:12.561 "data_offset": 0, 00:17:12.561 "data_size": 65536 00:17:12.561 }, 00:17:12.561 { 00:17:12.561 "name": "BaseBdev2", 00:17:12.561 "uuid": "7bec359a-aed6-4811-85d7-bcc5fe2aa46a", 00:17:12.561 "is_configured": true, 00:17:12.561 "data_offset": 0, 00:17:12.561 "data_size": 65536 00:17:12.561 }, 00:17:12.561 { 00:17:12.561 "name": "BaseBdev3", 00:17:12.561 "uuid": "2e5dd763-de13-4e28-ae03-a3ad23bf6595", 00:17:12.561 "is_configured": true, 00:17:12.561 "data_offset": 0, 00:17:12.561 "data_size": 65536 00:17:12.561 }, 00:17:12.561 { 00:17:12.561 "name": "BaseBdev4", 00:17:12.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.561 "is_configured": false, 00:17:12.561 "data_offset": 0, 00:17:12.561 "data_size": 0 00:17:12.561 } 00:17:12.561 ] 00:17:12.561 }' 00:17:12.561 04:57:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.561 04:57:36 -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 04:57:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:13.129 [2024-11-18 04:57:36.604637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.129 [2024-11-18 04:57:36.604927] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:17:13.129 [2024-11-18 04:57:36.604987] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:13.129 [2024-11-18 04:57:36.605264] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:13.129 [2024-11-18 04:57:36.605749] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:17:13.129 [2024-11-18 04:57:36.605777] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:17:13.129 [2024-11-18 04:57:36.606052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.129 BaseBdev4 00:17:13.129 04:57:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:13.129 04:57:36 -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:13.129 04:57:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:13.129 04:57:36 -- common/autotest_common.sh@899 -- # local i 00:17:13.129 04:57:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:13.129 04:57:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:13.129 04:57:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.389 04:57:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:13.648 [ 00:17:13.648 { 00:17:13.648 "name": "BaseBdev4", 00:17:13.648 "aliases": [ 00:17:13.648 "ca5d140a-d5e1-46b9-b681-af2fcee7a7b9" 00:17:13.648 ], 00:17:13.648 "product_name": "Malloc disk", 00:17:13.648 "block_size": 512, 00:17:13.648 "num_blocks": 65536, 00:17:13.648 "uuid": "ca5d140a-d5e1-46b9-b681-af2fcee7a7b9", 00:17:13.648 "assigned_rate_limits": { 00:17:13.648 "rw_ios_per_sec": 0, 00:17:13.648 "rw_mbytes_per_sec": 0, 00:17:13.648 "r_mbytes_per_sec": 0, 00:17:13.648 "w_mbytes_per_sec": 0 00:17:13.648 }, 00:17:13.648 "claimed": true, 00:17:13.648 "claim_type": "exclusive_write", 00:17:13.648 "zoned": false, 00:17:13.648 "supported_io_types": { 00:17:13.648 "read": true, 00:17:13.648 "write": true, 00:17:13.648 "unmap": true, 00:17:13.648 "write_zeroes": true, 00:17:13.648 "flush": true, 00:17:13.648 "reset": true, 00:17:13.648 "compare": false, 00:17:13.648 "compare_and_write": false, 00:17:13.649 "abort": true, 00:17:13.649 "nvme_admin": false, 00:17:13.649 "nvme_io": false 00:17:13.649 }, 00:17:13.649 "memory_domains": [ 00:17:13.649 { 00:17:13.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.649 "dma_device_type": 2 00:17:13.649 } 00:17:13.649 ], 00:17:13.649 "driver_specific": {} 00:17:13.649 } 00:17:13.649 ] 00:17:13.649 04:57:37 -- common/autotest_common.sh@905 -- # return 0 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.649 04:57:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.908 04:57:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.908 "name": "Existed_Raid", 00:17:13.908 "uuid": "64c9029e-4c27-48cf-9f8e-ea98ea5045d7", 00:17:13.908 "strip_size_kb": 64, 00:17:13.908 "state": "online", 00:17:13.908 "raid_level": "raid0", 00:17:13.908 "superblock": false, 00:17:13.908 "num_base_bdevs": 4, 00:17:13.908 
"num_base_bdevs_discovered": 4, 00:17:13.908 "num_base_bdevs_operational": 4, 00:17:13.908 "base_bdevs_list": [ 00:17:13.908 { 00:17:13.908 "name": "BaseBdev1", 00:17:13.908 "uuid": "edef67b4-0fde-4b57-bfde-0daae2107d7f", 00:17:13.908 "is_configured": true, 00:17:13.908 "data_offset": 0, 00:17:13.908 "data_size": 65536 00:17:13.908 }, 00:17:13.908 { 00:17:13.908 "name": "BaseBdev2", 00:17:13.908 "uuid": "7bec359a-aed6-4811-85d7-bcc5fe2aa46a", 00:17:13.908 "is_configured": true, 00:17:13.908 "data_offset": 0, 00:17:13.908 "data_size": 65536 00:17:13.908 }, 00:17:13.908 { 00:17:13.908 "name": "BaseBdev3", 00:17:13.908 "uuid": "2e5dd763-de13-4e28-ae03-a3ad23bf6595", 00:17:13.908 "is_configured": true, 00:17:13.908 "data_offset": 0, 00:17:13.908 "data_size": 65536 00:17:13.908 }, 00:17:13.908 { 00:17:13.908 "name": "BaseBdev4", 00:17:13.908 "uuid": "ca5d140a-d5e1-46b9-b681-af2fcee7a7b9", 00:17:13.908 "is_configured": true, 00:17:13.908 "data_offset": 0, 00:17:13.908 "data_size": 65536 00:17:13.908 } 00:17:13.908 ] 00:17:13.908 }' 00:17:13.908 04:57:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.908 04:57:37 -- common/autotest_common.sh@10 -- # set +x 00:17:14.168 04:57:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:14.427 [2024-11-18 04:57:37.841111] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.427 [2024-11-18 04:57:37.841372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.427 [2024-11-18 04:57:37.841550] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.427 04:57:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.686 04:57:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.686 "name": "Existed_Raid", 00:17:14.686 "uuid": "64c9029e-4c27-48cf-9f8e-ea98ea5045d7", 00:17:14.686 "strip_size_kb": 64, 00:17:14.686 "state": "offline", 00:17:14.686 "raid_level": "raid0", 00:17:14.686 "superblock": false, 00:17:14.686 "num_base_bdevs": 4, 00:17:14.686 "num_base_bdevs_discovered": 3, 00:17:14.686 "num_base_bdevs_operational": 3, 00:17:14.686 "base_bdevs_list": [ 00:17:14.686 { 
00:17:14.686 "name": null, 00:17:14.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.686 "is_configured": false, 00:17:14.686 "data_offset": 0, 00:17:14.686 "data_size": 65536 00:17:14.686 }, 00:17:14.686 { 00:17:14.686 "name": "BaseBdev2", 00:17:14.686 "uuid": "7bec359a-aed6-4811-85d7-bcc5fe2aa46a", 00:17:14.686 "is_configured": true, 00:17:14.686 "data_offset": 0, 00:17:14.686 "data_size": 65536 00:17:14.686 }, 00:17:14.686 { 00:17:14.686 "name": "BaseBdev3", 00:17:14.686 "uuid": "2e5dd763-de13-4e28-ae03-a3ad23bf6595", 00:17:14.686 "is_configured": true, 00:17:14.686 "data_offset": 0, 00:17:14.686 "data_size": 65536 00:17:14.686 }, 00:17:14.686 { 00:17:14.686 "name": "BaseBdev4", 00:17:14.686 "uuid": "ca5d140a-d5e1-46b9-b681-af2fcee7a7b9", 00:17:14.686 "is_configured": true, 00:17:14.686 "data_offset": 0, 00:17:14.686 "data_size": 65536 00:17:14.686 } 00:17:14.686 ] 00:17:14.686 }' 00:17:14.686 04:57:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.686 04:57:38 -- common/autotest_common.sh@10 -- # set +x 00:17:14.945 04:57:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:14.945 04:57:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:14.945 04:57:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.945 04:57:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:15.204 04:57:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:15.204 04:57:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.204 04:57:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:15.463 [2024-11-18 04:57:38.869454] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:15.463 04:57:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:15.463 04:57:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.463 04:57:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.463 04:57:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:15.722 04:57:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:15.722 04:57:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.722 04:57:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:15.981 [2024-11-18 04:57:39.423558] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.241 04:57:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:16.500 [2024-11-18 04:57:39.908842] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:16.500 [2024-11-18 04:57:39.908907] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 
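(Annotation; not part of the captured console output.) Every state check in this test follows the one pattern visible in the trace above: dump all raid bdevs over the test socket, filter for the bdev under test with jq, then compare fields such as .state, .num_base_bdevs_discovered and .num_base_bdevs_operational against the expected values. A minimal sketch of that query, assuming the bdev_svc app is still listening on the same socket path used by this run:

  # Sketch only: reproduces the verify_raid_bdev_state query seen in the trace.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # Fetch every raid bdev known to the target, keep only Existed_Raid.
  info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "Existed_Raid")')
  # Fields the test asserts on.
  echo "$info" | jq -r '.state, .num_base_bdevs_discovered'

The "offline" expectation above is the raid0-specific branch: has_redundancy returns 1 for raid0, so removing a single base bdev is expected to take the whole array offline rather than degrade it.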
00:17:16.500 04:57:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.500 04:57:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.500 04:57:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.500 04:57:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:16.759 04:57:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:16.759 04:57:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:16.759 04:57:40 -- bdev/bdev_raid.sh@287 -- # killprocess 74598 00:17:16.759 04:57:40 -- common/autotest_common.sh@936 -- # '[' -z 74598 ']' 00:17:16.759 04:57:40 -- common/autotest_common.sh@940 -- # kill -0 74598 00:17:16.759 04:57:40 -- common/autotest_common.sh@941 -- # uname 00:17:16.759 04:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.759 04:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74598 00:17:16.759 killing process with pid 74598 00:17:16.759 04:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:16.759 04:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:16.759 04:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74598' 00:17:16.759 04:57:40 -- common/autotest_common.sh@955 -- # kill 74598 00:17:16.760 [2024-11-18 04:57:40.238563] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.760 04:57:40 -- common/autotest_common.sh@960 -- # wait 74598 00:17:16.760 [2024-11-18 04:57:40.238701] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.139 ************************************ 00:17:18.139 END TEST raid_state_function_test 00:17:18.139 ************************************ 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:18.139 00:17:18.139 real 0m11.688s 00:17:18.139 user 0m19.566s 00:17:18.139 sys 0m1.699s 00:17:18.139 04:57:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:18.139 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:18.139 04:57:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:18.139 04:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.139 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:17:18.139 ************************************ 00:17:18.139 START TEST raid_state_function_test_sb 00:17:18.139 ************************************ 00:17:18.139 04:57:41 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.139 04:57:41 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:18.139 Process raid pid: 74992 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=74992 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74992' 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74992 /var/tmp/spdk-raid.sock 00:17:18.139 04:57:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:18.139 04:57:41 -- common/autotest_common.sh@829 -- # '[' -z 74992 ']' 00:17:18.139 04:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:18.139 04:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.139 04:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:18.139 04:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.139 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:17:18.139 [2024-11-18 04:57:41.443818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
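(Annotation; not part of the captured console output.) raid_state_function_test_sb repeats the raid0 state walk with superblocks enabled; on the RPC surface the only difference is the -s flag passed to bdev_raid_create, which is consistent with the dumps below reporting data_offset 2048 and data_size 63488 instead of 0 and 65536. A minimal sketch of the create sequence this test drives, assuming the same socket and the 32 MiB / 512 B malloc geometry used throughout the run:

  # Sketch only: mirrors the bdev_malloc_create / bdev_raid_create calls in the trace.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # Four 32 MiB malloc bdevs with 512 B blocks (65536 blocks each).
  for i in 1 2 3 4; do
      "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b "BaseBdev$i"
  done
  # raid0 with 64 KiB strips; -s asks for an on-disk superblock, which
  # (per the dumps) leaves the first 2048 blocks of each base bdev as metadata.
  "$RPC" -s "$SOCK" bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid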
00:17:18.139 [2024-11-18 04:57:41.443977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.139 [2024-11-18 04:57:41.614631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.399 [2024-11-18 04:57:41.804104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.672 [2024-11-18 04:57:41.977585] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.933 04:57:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.933 04:57:42 -- common/autotest_common.sh@862 -- # return 0 00:17:18.933 04:57:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:19.193 [2024-11-18 04:57:42.626260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.193 [2024-11-18 04:57:42.626343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.193 [2024-11-18 04:57:42.626359] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.193 [2024-11-18 04:57:42.626374] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.193 [2024-11-18 04:57:42.626382] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.193 [2024-11-18 04:57:42.626394] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.193 [2024-11-18 04:57:42.626402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.193 [2024-11-18 04:57:42.626413] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.193 04:57:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.453 04:57:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.453 "name": "Existed_Raid", 00:17:19.453 "uuid": "61e88b55-7eb4-411a-b779-584d1eb7bcab", 00:17:19.453 "strip_size_kb": 64, 00:17:19.453 "state": "configuring", 00:17:19.453 "raid_level": "raid0", 00:17:19.453 "superblock": true, 00:17:19.453 "num_base_bdevs": 4, 00:17:19.453 "num_base_bdevs_discovered": 0, 00:17:19.453 "num_base_bdevs_operational": 4, 00:17:19.453 "base_bdevs_list": [ 00:17:19.453 { 00:17:19.453 
"name": "BaseBdev1", 00:17:19.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.453 "is_configured": false, 00:17:19.453 "data_offset": 0, 00:17:19.453 "data_size": 0 00:17:19.453 }, 00:17:19.453 { 00:17:19.453 "name": "BaseBdev2", 00:17:19.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.453 "is_configured": false, 00:17:19.453 "data_offset": 0, 00:17:19.453 "data_size": 0 00:17:19.453 }, 00:17:19.453 { 00:17:19.453 "name": "BaseBdev3", 00:17:19.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.453 "is_configured": false, 00:17:19.453 "data_offset": 0, 00:17:19.453 "data_size": 0 00:17:19.453 }, 00:17:19.453 { 00:17:19.453 "name": "BaseBdev4", 00:17:19.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.453 "is_configured": false, 00:17:19.453 "data_offset": 0, 00:17:19.453 "data_size": 0 00:17:19.453 } 00:17:19.453 ] 00:17:19.453 }' 00:17:19.453 04:57:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.453 04:57:42 -- common/autotest_common.sh@10 -- # set +x 00:17:19.712 04:57:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.971 [2024-11-18 04:57:43.338278] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.971 [2024-11-18 04:57:43.338338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:19.971 04:57:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.231 [2024-11-18 04:57:43.562386] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.231 [2024-11-18 04:57:43.562461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.231 [2024-11-18 04:57:43.562474] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.231 [2024-11-18 04:57:43.562488] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.231 [2024-11-18 04:57:43.562496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.231 [2024-11-18 04:57:43.562508] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.231 [2024-11-18 04:57:43.562515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.231 [2024-11-18 04:57:43.562526] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.231 04:57:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.490 [2024-11-18 04:57:43.842487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.490 BaseBdev1 00:17:20.490 04:57:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:20.490 04:57:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:20.490 04:57:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:20.490 04:57:43 -- common/autotest_common.sh@899 -- # local i 00:17:20.490 04:57:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:20.490 04:57:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:20.490 04:57:43 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.749 04:57:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.009 [ 00:17:21.009 { 00:17:21.009 "name": "BaseBdev1", 00:17:21.009 "aliases": [ 00:17:21.009 "7e029d1c-72a2-4b98-b245-07cc435032c8" 00:17:21.009 ], 00:17:21.009 "product_name": "Malloc disk", 00:17:21.009 "block_size": 512, 00:17:21.009 "num_blocks": 65536, 00:17:21.009 "uuid": "7e029d1c-72a2-4b98-b245-07cc435032c8", 00:17:21.009 "assigned_rate_limits": { 00:17:21.009 "rw_ios_per_sec": 0, 00:17:21.009 "rw_mbytes_per_sec": 0, 00:17:21.009 "r_mbytes_per_sec": 0, 00:17:21.009 "w_mbytes_per_sec": 0 00:17:21.009 }, 00:17:21.009 "claimed": true, 00:17:21.009 "claim_type": "exclusive_write", 00:17:21.009 "zoned": false, 00:17:21.009 "supported_io_types": { 00:17:21.009 "read": true, 00:17:21.009 "write": true, 00:17:21.009 "unmap": true, 00:17:21.009 "write_zeroes": true, 00:17:21.009 "flush": true, 00:17:21.009 "reset": true, 00:17:21.009 "compare": false, 00:17:21.009 "compare_and_write": false, 00:17:21.009 "abort": true, 00:17:21.009 "nvme_admin": false, 00:17:21.009 "nvme_io": false 00:17:21.009 }, 00:17:21.009 "memory_domains": [ 00:17:21.009 { 00:17:21.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.009 "dma_device_type": 2 00:17:21.009 } 00:17:21.009 ], 00:17:21.009 "driver_specific": {} 00:17:21.009 } 00:17:21.009 ] 00:17:21.009 04:57:44 -- common/autotest_common.sh@905 -- # return 0 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.009 04:57:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.009 "name": "Existed_Raid", 00:17:21.009 "uuid": "f5475dfb-39f2-489f-a269-28d4f63cd4d8", 00:17:21.009 "strip_size_kb": 64, 00:17:21.009 "state": "configuring", 00:17:21.009 "raid_level": "raid0", 00:17:21.009 "superblock": true, 00:17:21.009 "num_base_bdevs": 4, 00:17:21.009 "num_base_bdevs_discovered": 1, 00:17:21.009 "num_base_bdevs_operational": 4, 00:17:21.009 "base_bdevs_list": [ 00:17:21.009 { 00:17:21.009 "name": "BaseBdev1", 00:17:21.009 "uuid": "7e029d1c-72a2-4b98-b245-07cc435032c8", 00:17:21.009 "is_configured": true, 00:17:21.009 "data_offset": 2048, 00:17:21.009 "data_size": 63488 00:17:21.009 }, 00:17:21.009 { 00:17:21.009 "name": "BaseBdev2", 00:17:21.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.009 "is_configured": false, 00:17:21.009 "data_offset": 0, 00:17:21.009 "data_size": 0 00:17:21.009 }, 
00:17:21.009 { 00:17:21.009 "name": "BaseBdev3", 00:17:21.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.009 "is_configured": false, 00:17:21.009 "data_offset": 0, 00:17:21.009 "data_size": 0 00:17:21.009 }, 00:17:21.009 { 00:17:21.009 "name": "BaseBdev4", 00:17:21.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.010 "is_configured": false, 00:17:21.010 "data_offset": 0, 00:17:21.010 "data_size": 0 00:17:21.010 } 00:17:21.010 ] 00:17:21.010 }' 00:17:21.010 04:57:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.010 04:57:44 -- common/autotest_common.sh@10 -- # set +x 00:17:21.268 04:57:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:21.526 [2024-11-18 04:57:44.950900] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.526 [2024-11-18 04:57:44.951181] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:21.526 04:57:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:21.526 04:57:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:21.785 04:57:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:22.044 BaseBdev1 00:17:22.044 04:57:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:22.044 04:57:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:22.044 04:57:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:22.044 04:57:45 -- common/autotest_common.sh@899 -- # local i 00:17:22.044 04:57:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:22.044 04:57:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:22.044 04:57:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.303 04:57:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.563 [ 00:17:22.563 { 00:17:22.563 "name": "BaseBdev1", 00:17:22.563 "aliases": [ 00:17:22.563 "33f234bb-507c-4d70-b102-6fecbb41c441" 00:17:22.563 ], 00:17:22.563 "product_name": "Malloc disk", 00:17:22.563 "block_size": 512, 00:17:22.563 "num_blocks": 65536, 00:17:22.563 "uuid": "33f234bb-507c-4d70-b102-6fecbb41c441", 00:17:22.563 "assigned_rate_limits": { 00:17:22.563 "rw_ios_per_sec": 0, 00:17:22.563 "rw_mbytes_per_sec": 0, 00:17:22.563 "r_mbytes_per_sec": 0, 00:17:22.563 "w_mbytes_per_sec": 0 00:17:22.563 }, 00:17:22.563 "claimed": false, 00:17:22.563 "zoned": false, 00:17:22.563 "supported_io_types": { 00:17:22.563 "read": true, 00:17:22.563 "write": true, 00:17:22.563 "unmap": true, 00:17:22.563 "write_zeroes": true, 00:17:22.563 "flush": true, 00:17:22.563 "reset": true, 00:17:22.563 "compare": false, 00:17:22.563 "compare_and_write": false, 00:17:22.563 "abort": true, 00:17:22.563 "nvme_admin": false, 00:17:22.563 "nvme_io": false 00:17:22.563 }, 00:17:22.563 "memory_domains": [ 00:17:22.563 { 00:17:22.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.563 "dma_device_type": 2 00:17:22.563 } 00:17:22.563 ], 00:17:22.563 "driver_specific": {} 00:17:22.563 } 00:17:22.563 ] 00:17:22.563 04:57:45 -- common/autotest_common.sh@905 -- # return 0 00:17:22.563 04:57:45 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:22.563 [2024-11-18 04:57:46.062108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.563 [2024-11-18 04:57:46.064235] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.563 [2024-11-18 04:57:46.064449] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.563 [2024-11-18 04:57:46.064477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.563 [2024-11-18 04:57:46.064494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.563 [2024-11-18 04:57:46.064503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:22.563 [2024-11-18 04:57:46.064518] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.563 04:57:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.822 04:57:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.822 04:57:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.822 04:57:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.823 "name": "Existed_Raid", 00:17:22.823 "uuid": "1082bc4b-7da0-4c50-8f31-e7352357fce4", 00:17:22.823 "strip_size_kb": 64, 00:17:22.823 "state": "configuring", 00:17:22.823 "raid_level": "raid0", 00:17:22.823 "superblock": true, 00:17:22.823 "num_base_bdevs": 4, 00:17:22.823 "num_base_bdevs_discovered": 1, 00:17:22.823 "num_base_bdevs_operational": 4, 00:17:22.823 "base_bdevs_list": [ 00:17:22.823 { 00:17:22.823 "name": "BaseBdev1", 00:17:22.823 "uuid": "33f234bb-507c-4d70-b102-6fecbb41c441", 00:17:22.823 "is_configured": true, 00:17:22.823 "data_offset": 2048, 00:17:22.823 "data_size": 63488 00:17:22.823 }, 00:17:22.823 { 00:17:22.823 "name": "BaseBdev2", 00:17:22.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.823 "is_configured": false, 00:17:22.823 "data_offset": 0, 00:17:22.823 "data_size": 0 00:17:22.823 }, 00:17:22.823 { 00:17:22.823 "name": "BaseBdev3", 00:17:22.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.823 "is_configured": false, 00:17:22.823 "data_offset": 0, 00:17:22.823 "data_size": 0 00:17:22.823 }, 00:17:22.823 { 00:17:22.823 "name": "BaseBdev4", 00:17:22.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.823 "is_configured": 
false, 00:17:22.823 "data_offset": 0, 00:17:22.823 "data_size": 0 00:17:22.823 } 00:17:22.823 ] 00:17:22.823 }' 00:17:22.823 04:57:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.823 04:57:46 -- common/autotest_common.sh@10 -- # set +x 00:17:23.083 04:57:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.342 [2024-11-18 04:57:46.780740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.342 BaseBdev2 00:17:23.342 04:57:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:23.342 04:57:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:23.342 04:57:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:23.342 04:57:46 -- common/autotest_common.sh@899 -- # local i 00:17:23.342 04:57:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:23.342 04:57:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:23.342 04:57:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.600 04:57:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.859 [ 00:17:23.859 { 00:17:23.859 "name": "BaseBdev2", 00:17:23.859 "aliases": [ 00:17:23.859 "a27d51c3-91e8-44ad-9d9b-66361c335740" 00:17:23.859 ], 00:17:23.859 "product_name": "Malloc disk", 00:17:23.859 "block_size": 512, 00:17:23.859 "num_blocks": 65536, 00:17:23.859 "uuid": "a27d51c3-91e8-44ad-9d9b-66361c335740", 00:17:23.859 "assigned_rate_limits": { 00:17:23.859 "rw_ios_per_sec": 0, 00:17:23.859 "rw_mbytes_per_sec": 0, 00:17:23.859 "r_mbytes_per_sec": 0, 00:17:23.859 "w_mbytes_per_sec": 0 00:17:23.859 }, 00:17:23.859 "claimed": true, 00:17:23.859 "claim_type": "exclusive_write", 00:17:23.859 "zoned": false, 00:17:23.859 "supported_io_types": { 00:17:23.859 "read": true, 00:17:23.859 "write": true, 00:17:23.859 "unmap": true, 00:17:23.859 "write_zeroes": true, 00:17:23.859 "flush": true, 00:17:23.859 "reset": true, 00:17:23.859 "compare": false, 00:17:23.859 "compare_and_write": false, 00:17:23.859 "abort": true, 00:17:23.859 "nvme_admin": false, 00:17:23.859 "nvme_io": false 00:17:23.859 }, 00:17:23.859 "memory_domains": [ 00:17:23.859 { 00:17:23.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.859 "dma_device_type": 2 00:17:23.859 } 00:17:23.859 ], 00:17:23.859 "driver_specific": {} 00:17:23.859 } 00:17:23.859 ] 00:17:23.859 04:57:47 -- common/autotest_common.sh@905 -- # return 0 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.859 
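The trace above repeats one pattern per remaining member: create a 32 MiB malloc disk with 512-byte blocks, wait for the examine path to finish, confirm the bdev exists, then re-check that Existed_Raid is still "configuring". A minimal sketch of that loop, using only the RPC calls visible in this run (the rpc/sock variables and the for-loop are illustrative shorthand, not the verbatim test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$bdev"   # 32 MiB, 512 B blocks -> 65536 blocks
        "$rpc" -s "$sock" bdev_wait_for_examine
        "$rpc" -s "$sock" bdev_get_bdevs -b "$bdev" -t 2000      # the waitforbdev check seen above
        # a raid0 built with a superblock stays "configuring" until all 4 members are claimed
        "$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    done

As the JSON dumps above and below show, num_base_bdevs_discovered climbs from 1 toward 4 while the state stays "configuring" until the last member arrives.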
04:57:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.859 04:57:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.119 04:57:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.119 "name": "Existed_Raid", 00:17:24.119 "uuid": "1082bc4b-7da0-4c50-8f31-e7352357fce4", 00:17:24.120 "strip_size_kb": 64, 00:17:24.120 "state": "configuring", 00:17:24.120 "raid_level": "raid0", 00:17:24.120 "superblock": true, 00:17:24.120 "num_base_bdevs": 4, 00:17:24.120 "num_base_bdevs_discovered": 2, 00:17:24.120 "num_base_bdevs_operational": 4, 00:17:24.120 "base_bdevs_list": [ 00:17:24.120 { 00:17:24.120 "name": "BaseBdev1", 00:17:24.120 "uuid": "33f234bb-507c-4d70-b102-6fecbb41c441", 00:17:24.120 "is_configured": true, 00:17:24.120 "data_offset": 2048, 00:17:24.120 "data_size": 63488 00:17:24.120 }, 00:17:24.120 { 00:17:24.120 "name": "BaseBdev2", 00:17:24.120 "uuid": "a27d51c3-91e8-44ad-9d9b-66361c335740", 00:17:24.120 "is_configured": true, 00:17:24.120 "data_offset": 2048, 00:17:24.120 "data_size": 63488 00:17:24.120 }, 00:17:24.120 { 00:17:24.120 "name": "BaseBdev3", 00:17:24.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.120 "is_configured": false, 00:17:24.120 "data_offset": 0, 00:17:24.120 "data_size": 0 00:17:24.120 }, 00:17:24.120 { 00:17:24.120 "name": "BaseBdev4", 00:17:24.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.120 "is_configured": false, 00:17:24.120 "data_offset": 0, 00:17:24.120 "data_size": 0 00:17:24.120 } 00:17:24.120 ] 00:17:24.120 }' 00:17:24.120 04:57:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.120 04:57:47 -- common/autotest_common.sh@10 -- # set +x 00:17:24.380 04:57:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.639 [2024-11-18 04:57:48.007833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.639 BaseBdev3 00:17:24.639 04:57:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:24.639 04:57:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:24.639 04:57:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.639 04:57:48 -- common/autotest_common.sh@899 -- # local i 00:17:24.639 04:57:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.639 04:57:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.639 04:57:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.898 04:57:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.156 [ 00:17:25.156 { 00:17:25.156 "name": "BaseBdev3", 00:17:25.156 "aliases": [ 00:17:25.156 "5d2dd382-e2b4-416f-b80f-3ba5396d2f3c" 00:17:25.156 ], 00:17:25.156 "product_name": "Malloc disk", 00:17:25.156 "block_size": 512, 00:17:25.156 "num_blocks": 65536, 00:17:25.156 "uuid": "5d2dd382-e2b4-416f-b80f-3ba5396d2f3c", 00:17:25.156 "assigned_rate_limits": { 00:17:25.156 "rw_ios_per_sec": 0, 00:17:25.156 "rw_mbytes_per_sec": 0, 00:17:25.156 "r_mbytes_per_sec": 0, 00:17:25.156 "w_mbytes_per_sec": 0 00:17:25.156 }, 00:17:25.156 "claimed": true, 00:17:25.156 "claim_type": "exclusive_write", 00:17:25.156 "zoned": false, 
00:17:25.156 "supported_io_types": { 00:17:25.156 "read": true, 00:17:25.156 "write": true, 00:17:25.156 "unmap": true, 00:17:25.156 "write_zeroes": true, 00:17:25.156 "flush": true, 00:17:25.156 "reset": true, 00:17:25.156 "compare": false, 00:17:25.156 "compare_and_write": false, 00:17:25.156 "abort": true, 00:17:25.156 "nvme_admin": false, 00:17:25.156 "nvme_io": false 00:17:25.156 }, 00:17:25.156 "memory_domains": [ 00:17:25.156 { 00:17:25.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.156 "dma_device_type": 2 00:17:25.156 } 00:17:25.156 ], 00:17:25.156 "driver_specific": {} 00:17:25.156 } 00:17:25.156 ] 00:17:25.156 04:57:48 -- common/autotest_common.sh@905 -- # return 0 00:17:25.156 04:57:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.156 04:57:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.157 04:57:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.415 04:57:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.415 "name": "Existed_Raid", 00:17:25.415 "uuid": "1082bc4b-7da0-4c50-8f31-e7352357fce4", 00:17:25.415 "strip_size_kb": 64, 00:17:25.415 "state": "configuring", 00:17:25.415 "raid_level": "raid0", 00:17:25.415 "superblock": true, 00:17:25.415 "num_base_bdevs": 4, 00:17:25.415 "num_base_bdevs_discovered": 3, 00:17:25.415 "num_base_bdevs_operational": 4, 00:17:25.415 "base_bdevs_list": [ 00:17:25.415 { 00:17:25.415 "name": "BaseBdev1", 00:17:25.415 "uuid": "33f234bb-507c-4d70-b102-6fecbb41c441", 00:17:25.415 "is_configured": true, 00:17:25.415 "data_offset": 2048, 00:17:25.415 "data_size": 63488 00:17:25.415 }, 00:17:25.415 { 00:17:25.415 "name": "BaseBdev2", 00:17:25.415 "uuid": "a27d51c3-91e8-44ad-9d9b-66361c335740", 00:17:25.415 "is_configured": true, 00:17:25.415 "data_offset": 2048, 00:17:25.415 "data_size": 63488 00:17:25.415 }, 00:17:25.415 { 00:17:25.415 "name": "BaseBdev3", 00:17:25.415 "uuid": "5d2dd382-e2b4-416f-b80f-3ba5396d2f3c", 00:17:25.415 "is_configured": true, 00:17:25.415 "data_offset": 2048, 00:17:25.415 "data_size": 63488 00:17:25.415 }, 00:17:25.415 { 00:17:25.415 "name": "BaseBdev4", 00:17:25.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.415 "is_configured": false, 00:17:25.415 "data_offset": 0, 00:17:25.415 "data_size": 0 00:17:25.415 } 00:17:25.415 ] 00:17:25.415 }' 00:17:25.415 04:57:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.415 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:17:25.675 04:57:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:25.937 [2024-11-18 04:57:49.228424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.937 [2024-11-18 04:57:49.228868] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:17:25.937 [2024-11-18 04:57:49.229004] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:25.937 [2024-11-18 04:57:49.229162] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:25.937 [2024-11-18 04:57:49.229586] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:17:25.937 BaseBdev4 00:17:25.937 [2024-11-18 04:57:49.229758] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:17:25.937 [2024-11-18 04:57:49.230044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.937 04:57:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:25.937 04:57:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:25.937 04:57:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.938 04:57:49 -- common/autotest_common.sh@899 -- # local i 00:17:25.938 04:57:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:25.938 04:57:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.938 04:57:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.938 04:57:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:26.202 [ 00:17:26.202 { 00:17:26.202 "name": "BaseBdev4", 00:17:26.202 "aliases": [ 00:17:26.202 "8b02863f-30d3-4eb3-8e3f-3d249c579b4a" 00:17:26.202 ], 00:17:26.203 "product_name": "Malloc disk", 00:17:26.203 "block_size": 512, 00:17:26.203 "num_blocks": 65536, 00:17:26.203 "uuid": "8b02863f-30d3-4eb3-8e3f-3d249c579b4a", 00:17:26.203 "assigned_rate_limits": { 00:17:26.203 "rw_ios_per_sec": 0, 00:17:26.203 "rw_mbytes_per_sec": 0, 00:17:26.203 "r_mbytes_per_sec": 0, 00:17:26.203 "w_mbytes_per_sec": 0 00:17:26.203 }, 00:17:26.203 "claimed": true, 00:17:26.203 "claim_type": "exclusive_write", 00:17:26.203 "zoned": false, 00:17:26.203 "supported_io_types": { 00:17:26.203 "read": true, 00:17:26.203 "write": true, 00:17:26.203 "unmap": true, 00:17:26.203 "write_zeroes": true, 00:17:26.203 "flush": true, 00:17:26.203 "reset": true, 00:17:26.203 "compare": false, 00:17:26.203 "compare_and_write": false, 00:17:26.203 "abort": true, 00:17:26.203 "nvme_admin": false, 00:17:26.203 "nvme_io": false 00:17:26.203 }, 00:17:26.203 "memory_domains": [ 00:17:26.203 { 00:17:26.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.203 "dma_device_type": 2 00:17:26.203 } 00:17:26.203 ], 00:17:26.203 "driver_specific": {} 00:17:26.203 } 00:17:26.203 ] 00:17:26.203 04:57:49 -- common/autotest_common.sh@905 -- # return 0 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
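The capacity line in the transition above is worth decoding: each malloc member exposes 65536 blocks, the superblock variant reserves the first 2048 of them (data_offset 2048), leaving data_size 63488, so a raid0 across four members registers with blockcnt 4 x 63488 = 253952 at blocklen 512. A one-line sanity check of that arithmetic (shell sketch, not part of the test):

    echo $(( 4 * (65536 - 2048) ))   # 253952, matching "blockcnt 253952, blocklen 512" above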
00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.203 04:57:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.462 04:57:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.462 "name": "Existed_Raid", 00:17:26.462 "uuid": "1082bc4b-7da0-4c50-8f31-e7352357fce4", 00:17:26.462 "strip_size_kb": 64, 00:17:26.462 "state": "online", 00:17:26.462 "raid_level": "raid0", 00:17:26.462 "superblock": true, 00:17:26.462 "num_base_bdevs": 4, 00:17:26.462 "num_base_bdevs_discovered": 4, 00:17:26.462 "num_base_bdevs_operational": 4, 00:17:26.462 "base_bdevs_list": [ 00:17:26.462 { 00:17:26.462 "name": "BaseBdev1", 00:17:26.462 "uuid": "33f234bb-507c-4d70-b102-6fecbb41c441", 00:17:26.462 "is_configured": true, 00:17:26.462 "data_offset": 2048, 00:17:26.462 "data_size": 63488 00:17:26.462 }, 00:17:26.462 { 00:17:26.462 "name": "BaseBdev2", 00:17:26.462 "uuid": "a27d51c3-91e8-44ad-9d9b-66361c335740", 00:17:26.462 "is_configured": true, 00:17:26.462 "data_offset": 2048, 00:17:26.462 "data_size": 63488 00:17:26.462 }, 00:17:26.462 { 00:17:26.462 "name": "BaseBdev3", 00:17:26.462 "uuid": "5d2dd382-e2b4-416f-b80f-3ba5396d2f3c", 00:17:26.462 "is_configured": true, 00:17:26.462 "data_offset": 2048, 00:17:26.462 "data_size": 63488 00:17:26.462 }, 00:17:26.462 { 00:17:26.462 "name": "BaseBdev4", 00:17:26.462 "uuid": "8b02863f-30d3-4eb3-8e3f-3d249c579b4a", 00:17:26.462 "is_configured": true, 00:17:26.462 "data_offset": 2048, 00:17:26.462 "data_size": 63488 00:17:26.462 } 00:17:26.462 ] 00:17:26.462 }' 00:17:26.462 04:57:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.462 04:57:49 -- common/autotest_common.sh@10 -- # set +x 00:17:26.721 04:57:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.979 [2024-11-18 04:57:50.396876] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.979 [2024-11-18 04:57:50.397086] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.979 [2024-11-18 04:57:50.397305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.979 04:57:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.237 04:57:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.237 "name": "Existed_Raid", 00:17:27.237 "uuid": "1082bc4b-7da0-4c50-8f31-e7352357fce4", 00:17:27.237 "strip_size_kb": 64, 00:17:27.237 "state": "offline", 00:17:27.237 "raid_level": "raid0", 00:17:27.237 "superblock": true, 00:17:27.237 "num_base_bdevs": 4, 00:17:27.237 "num_base_bdevs_discovered": 3, 00:17:27.237 "num_base_bdevs_operational": 3, 00:17:27.237 "base_bdevs_list": [ 00:17:27.237 { 00:17:27.237 "name": null, 00:17:27.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.237 "is_configured": false, 00:17:27.237 "data_offset": 2048, 00:17:27.237 "data_size": 63488 00:17:27.237 }, 00:17:27.237 { 00:17:27.237 "name": "BaseBdev2", 00:17:27.237 "uuid": "a27d51c3-91e8-44ad-9d9b-66361c335740", 00:17:27.237 "is_configured": true, 00:17:27.237 "data_offset": 2048, 00:17:27.237 "data_size": 63488 00:17:27.237 }, 00:17:27.237 { 00:17:27.237 "name": "BaseBdev3", 00:17:27.237 "uuid": "5d2dd382-e2b4-416f-b80f-3ba5396d2f3c", 00:17:27.237 "is_configured": true, 00:17:27.237 "data_offset": 2048, 00:17:27.237 "data_size": 63488 00:17:27.237 }, 00:17:27.237 { 00:17:27.237 "name": "BaseBdev4", 00:17:27.237 "uuid": "8b02863f-30d3-4eb3-8e3f-3d249c579b4a", 00:17:27.237 "is_configured": true, 00:17:27.237 "data_offset": 2048, 00:17:27.237 "data_size": 63488 00:17:27.237 } 00:17:27.237 ] 00:17:27.237 }' 00:17:27.237 04:57:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.237 04:57:50 -- common/autotest_common.sh@10 -- # set +x 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.804 04:57:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:28.063 [2024-11-18 04:57:51.468953] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.063 04:57:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.063 04:57:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.063 04:57:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:28.063 04:57:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.322 04:57:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:28.322 04:57:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.322 04:57:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:17:28.581 [2024-11-18 04:57:51.986702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.581 04:57:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.581 04:57:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.581 04:57:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.581 04:57:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:28.840 04:57:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:28.840 04:57:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.840 04:57:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:29.099 [2024-11-18 04:57:52.544476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:29.099 [2024-11-18 04:57:52.544540] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:17:29.358 04:57:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:29.358 04:57:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:29.358 04:57:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.358 04:57:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:29.617 04:57:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:29.617 04:57:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:29.617 04:57:52 -- bdev/bdev_raid.sh@287 -- # killprocess 74992 00:17:29.617 04:57:52 -- common/autotest_common.sh@936 -- # '[' -z 74992 ']' 00:17:29.617 04:57:52 -- common/autotest_common.sh@940 -- # kill -0 74992 00:17:29.617 04:57:52 -- common/autotest_common.sh@941 -- # uname 00:17:29.617 04:57:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.617 04:57:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74992 00:17:29.617 killing process with pid 74992 00:17:29.617 04:57:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:29.617 04:57:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:29.617 04:57:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74992' 00:17:29.617 04:57:52 -- common/autotest_common.sh@955 -- # kill 74992 00:17:29.617 [2024-11-18 04:57:52.916968] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.617 04:57:52 -- common/autotest_common.sh@960 -- # wait 74992 00:17:29.617 [2024-11-18 04:57:52.917085] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.554 04:57:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:30.554 00:17:30.554 real 0m12.590s 00:17:30.554 user 0m21.164s 00:17:30.554 sys 0m1.798s 00:17:30.554 04:57:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:30.554 04:57:53 -- common/autotest_common.sh@10 -- # set +x 00:17:30.554 ************************************ 00:17:30.554 END TEST raid_state_function_test_sb 00:17:30.554 ************************************ 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:30.554 04:57:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:30.554 04:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:30.554 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:17:30.554 ************************************ 00:17:30.554 START TEST 
raid_superblock_test 00:17:30.554 ************************************ 00:17:30.554 04:57:54 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=75395 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 75395 /var/tmp/spdk-raid.sock 00:17:30.554 04:57:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:30.554 04:57:54 -- common/autotest_common.sh@829 -- # '[' -z 75395 ']' 00:17:30.554 04:57:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.554 04:57:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.554 04:57:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:30.554 04:57:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.554 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:17:30.813 [2024-11-18 04:57:54.079464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
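Unlike the state-function test above, raid_superblock_test builds each member as a passthru bdev layered over a malloc disk with a fixed UUID, presumably so the on-disk raid superblock records stable member identities that survive deleting and recreating the pt* bdevs later in this run. A reduced sketch of the fixture, assembled from the RPC calls this trace is about to issue (the loop and the $rpc shorthand are illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # fixed, predictable UUID per member
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -s writes a superblock onto each member
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

The consequence shows up further down: once superblocks exist, bdev_raid_create against the raw malloc bdevs is rejected with -17 "File exists", and simply recreating the pt* passthrus is enough for raid_bdev1 to reassemble on its own.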
00:17:30.813 [2024-11-18 04:57:54.079804] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75395 ] 00:17:30.813 [2024-11-18 04:57:54.250803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.072 [2024-11-18 04:57:54.426425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.072 [2024-11-18 04:57:54.585661] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.641 04:57:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.641 04:57:55 -- common/autotest_common.sh@862 -- # return 0 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.641 04:57:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:31.900 malloc1 00:17:31.900 04:57:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:32.158 [2024-11-18 04:57:55.471245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:32.158 [2024-11-18 04:57:55.471565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.158 [2024-11-18 04:57:55.471619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:17:32.158 [2024-11-18 04:57:55.471635] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.158 [2024-11-18 04:57:55.474054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.159 [2024-11-18 04:57:55.474097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:32.159 pt1 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.159 04:57:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:32.417 malloc2 00:17:32.417 04:57:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:32.676 [2024-11-18 04:57:55.946510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.676 [2024-11-18 04:57:55.946634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.676 [2024-11-18 04:57:55.946673] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:17:32.676 [2024-11-18 04:57:55.946688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.676 [2024-11-18 04:57:55.949275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.676 [2024-11-18 04:57:55.949319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.676 pt2 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.676 04:57:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:32.676 malloc3 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:32.936 [2024-11-18 04:57:56.433988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:32.936 [2024-11-18 04:57:56.434095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.936 [2024-11-18 04:57:56.434131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:17:32.936 [2024-11-18 04:57:56.434146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.936 [2024-11-18 04:57:56.436783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.936 [2024-11-18 04:57:56.436826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:32.936 pt3 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.936 04:57:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:33.194 malloc4 00:17:33.194 04:57:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:17:33.453 [2024-11-18 04:57:56.878826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:33.453 [2024-11-18 04:57:56.878918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.453 [2024-11-18 04:57:56.878958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:33.453 [2024-11-18 04:57:56.878973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.453 [2024-11-18 04:57:56.882303] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.453 [2024-11-18 04:57:56.882349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:33.453 pt4 00:17:33.453 04:57:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:33.453 04:57:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:33.453 04:57:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:33.714 [2024-11-18 04:57:57.083113] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.714 [2024-11-18 04:57:57.085287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.714 [2024-11-18 04:57:57.085417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:33.714 [2024-11-18 04:57:57.085494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:33.714 [2024-11-18 04:57:57.085758] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:17:33.714 [2024-11-18 04:57:57.085781] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:33.714 [2024-11-18 04:57:57.085928] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:33.714 [2024-11-18 04:57:57.086339] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:17:33.714 [2024-11-18 04:57:57.086366] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:17:33.714 [2024-11-18 04:57:57.086522] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.714 04:57:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.982 04:57:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.982 "name": "raid_bdev1", 00:17:33.982 "uuid": 
"f6cb7a40-aa39-4316-aae8-31b4b27f8f0a", 00:17:33.982 "strip_size_kb": 64, 00:17:33.982 "state": "online", 00:17:33.982 "raid_level": "raid0", 00:17:33.982 "superblock": true, 00:17:33.982 "num_base_bdevs": 4, 00:17:33.982 "num_base_bdevs_discovered": 4, 00:17:33.982 "num_base_bdevs_operational": 4, 00:17:33.982 "base_bdevs_list": [ 00:17:33.982 { 00:17:33.982 "name": "pt1", 00:17:33.982 "uuid": "e06afd51-3d08-5ab8-9485-c2b4458006a6", 00:17:33.982 "is_configured": true, 00:17:33.982 "data_offset": 2048, 00:17:33.982 "data_size": 63488 00:17:33.982 }, 00:17:33.982 { 00:17:33.982 "name": "pt2", 00:17:33.982 "uuid": "000ea3e4-a524-5b24-b521-c92753f7130d", 00:17:33.982 "is_configured": true, 00:17:33.982 "data_offset": 2048, 00:17:33.982 "data_size": 63488 00:17:33.982 }, 00:17:33.982 { 00:17:33.982 "name": "pt3", 00:17:33.982 "uuid": "d5e19606-a5d2-5292-a2a0-ed1cedc42642", 00:17:33.982 "is_configured": true, 00:17:33.982 "data_offset": 2048, 00:17:33.982 "data_size": 63488 00:17:33.982 }, 00:17:33.982 { 00:17:33.982 "name": "pt4", 00:17:33.982 "uuid": "6ec9b97a-00a8-59c7-9732-8826de2b7e25", 00:17:33.982 "is_configured": true, 00:17:33.982 "data_offset": 2048, 00:17:33.982 "data_size": 63488 00:17:33.982 } 00:17:33.982 ] 00:17:33.982 }' 00:17:33.982 04:57:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.982 04:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:34.241 04:57:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:34.241 04:57:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:34.499 [2024-11-18 04:57:57.859600] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.499 04:57:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f6cb7a40-aa39-4316-aae8-31b4b27f8f0a 00:17:34.499 04:57:57 -- bdev/bdev_raid.sh@380 -- # '[' -z f6cb7a40-aa39-4316-aae8-31b4b27f8f0a ']' 00:17:34.499 04:57:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:34.758 [2024-11-18 04:57:58.067309] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.758 [2024-11-18 04:57:58.067587] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.758 [2024-11-18 04:57:58.067788] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.758 [2024-11-18 04:57:58.067884] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.758 [2024-11-18 04:57:58.067901] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:17:34.758 04:57:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:34.758 04:57:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.016 04:57:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:35.016 04:57:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:35.016 04:57:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.016 04:57:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:35.275 04:57:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.275 04:57:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:17:35.275 04:57:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.275 04:57:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:35.534 04:57:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.534 04:57:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:35.792 04:57:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:35.793 04:57:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:36.051 04:57:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:36.051 04:57:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:36.051 04:57:59 -- common/autotest_common.sh@650 -- # local es=0 00:17:36.051 04:57:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:36.051 04:57:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.051 04:57:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.051 04:57:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.051 04:57:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.051 04:57:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.051 04:57:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.051 04:57:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.051 04:57:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:36.051 04:57:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:36.309 [2024-11-18 04:57:59.627804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:36.309 [2024-11-18 04:57:59.629826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:36.309 [2024-11-18 04:57:59.629888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:36.309 [2024-11-18 04:57:59.629932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:36.309 [2024-11-18 04:57:59.629993] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:36.309 [2024-11-18 04:57:59.630068] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:36.309 [2024-11-18 04:57:59.630101] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:36.309 [2024-11-18 04:57:59.630127] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:36.309 [2024-11-18 04:57:59.630148] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.309 [2024-11-18 04:57:59.630163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:17:36.309 request: 00:17:36.309 { 00:17:36.309 "name": "raid_bdev1", 00:17:36.309 "raid_level": "raid0", 00:17:36.309 "base_bdevs": [ 00:17:36.309 "malloc1", 00:17:36.309 "malloc2", 00:17:36.309 "malloc3", 00:17:36.309 "malloc4" 00:17:36.309 ], 00:17:36.309 "superblock": false, 00:17:36.309 "strip_size_kb": 64, 00:17:36.309 "method": "bdev_raid_create", 00:17:36.309 "req_id": 1 00:17:36.309 } 00:17:36.309 Got JSON-RPC error response 00:17:36.309 response: 00:17:36.309 { 00:17:36.309 "code": -17, 00:17:36.309 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:36.309 } 00:17:36.309 04:57:59 -- common/autotest_common.sh@653 -- # es=1 00:17:36.309 04:57:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.309 04:57:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.309 04:57:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.309 04:57:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:36.309 04:57:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.568 04:57:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:36.568 04:57:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:36.568 04:57:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.568 [2024-11-18 04:58:00.039817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.568 [2024-11-18 04:58:00.039909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.568 [2024-11-18 04:58:00.039941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:17:36.568 [2024-11-18 04:58:00.039956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.568 [2024-11-18 04:58:00.042252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.568 [2024-11-18 04:58:00.042293] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.568 [2024-11-18 04:58:00.042393] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:36.568 [2024-11-18 04:58:00.042455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.568 pt1 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.568 04:58:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.826 04:58:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.827 "name": "raid_bdev1", 00:17:36.827 "uuid": "f6cb7a40-aa39-4316-aae8-31b4b27f8f0a", 00:17:36.827 "strip_size_kb": 64, 00:17:36.827 "state": "configuring", 00:17:36.827 "raid_level": "raid0", 00:17:36.827 "superblock": true, 00:17:36.827 "num_base_bdevs": 4, 00:17:36.827 "num_base_bdevs_discovered": 1, 00:17:36.827 "num_base_bdevs_operational": 4, 00:17:36.827 "base_bdevs_list": [ 00:17:36.827 { 00:17:36.827 "name": "pt1", 00:17:36.827 "uuid": "e06afd51-3d08-5ab8-9485-c2b4458006a6", 00:17:36.827 "is_configured": true, 00:17:36.827 "data_offset": 2048, 00:17:36.827 "data_size": 63488 00:17:36.827 }, 00:17:36.827 { 00:17:36.827 "name": null, 00:17:36.827 "uuid": "000ea3e4-a524-5b24-b521-c92753f7130d", 00:17:36.827 "is_configured": false, 00:17:36.827 "data_offset": 2048, 00:17:36.827 "data_size": 63488 00:17:36.827 }, 00:17:36.827 { 00:17:36.827 "name": null, 00:17:36.827 "uuid": "d5e19606-a5d2-5292-a2a0-ed1cedc42642", 00:17:36.827 "is_configured": false, 00:17:36.827 "data_offset": 2048, 00:17:36.827 "data_size": 63488 00:17:36.827 }, 00:17:36.827 { 00:17:36.827 "name": null, 00:17:36.827 "uuid": "6ec9b97a-00a8-59c7-9732-8826de2b7e25", 00:17:36.827 "is_configured": false, 00:17:36.827 "data_offset": 2048, 00:17:36.827 "data_size": 63488 00:17:36.827 } 00:17:36.827 ] 00:17:36.827 }' 00:17:36.827 04:58:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.827 04:58:00 -- common/autotest_common.sh@10 -- # set +x 00:17:37.085 04:58:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:37.085 04:58:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.343 [2024-11-18 04:58:00.771980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.343 [2024-11-18 04:58:00.772231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.343 [2024-11-18 04:58:00.772282] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:17:37.343 [2024-11-18 04:58:00.772298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.343 [2024-11-18 04:58:00.772780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.343 [2024-11-18 04:58:00.772804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.343 [2024-11-18 04:58:00.772901] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:37.344 [2024-11-18 04:58:00.772928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.344 pt2 00:17:37.344 04:58:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:37.602 [2024-11-18 04:58:01.024035] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:37.602 04:58:01 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.602 04:58:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.861 04:58:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.861 "name": "raid_bdev1", 00:17:37.861 "uuid": "f6cb7a40-aa39-4316-aae8-31b4b27f8f0a", 00:17:37.861 "strip_size_kb": 64, 00:17:37.861 "state": "configuring", 00:17:37.861 "raid_level": "raid0", 00:17:37.861 "superblock": true, 00:17:37.861 "num_base_bdevs": 4, 00:17:37.861 "num_base_bdevs_discovered": 1, 00:17:37.861 "num_base_bdevs_operational": 4, 00:17:37.861 "base_bdevs_list": [ 00:17:37.861 { 00:17:37.861 "name": "pt1", 00:17:37.861 "uuid": "e06afd51-3d08-5ab8-9485-c2b4458006a6", 00:17:37.861 "is_configured": true, 00:17:37.861 "data_offset": 2048, 00:17:37.861 "data_size": 63488 00:17:37.861 }, 00:17:37.861 { 00:17:37.861 "name": null, 00:17:37.861 "uuid": "000ea3e4-a524-5b24-b521-c92753f7130d", 00:17:37.861 "is_configured": false, 00:17:37.861 "data_offset": 2048, 00:17:37.861 "data_size": 63488 00:17:37.861 }, 00:17:37.861 { 00:17:37.861 "name": null, 00:17:37.861 "uuid": "d5e19606-a5d2-5292-a2a0-ed1cedc42642", 00:17:37.861 "is_configured": false, 00:17:37.861 "data_offset": 2048, 00:17:37.861 "data_size": 63488 00:17:37.861 }, 00:17:37.861 { 00:17:37.861 "name": null, 00:17:37.861 "uuid": "6ec9b97a-00a8-59c7-9732-8826de2b7e25", 00:17:37.861 "is_configured": false, 00:17:37.861 "data_offset": 2048, 00:17:37.861 "data_size": 63488 00:17:37.861 } 00:17:37.861 ] 00:17:37.861 }' 00:17:37.861 04:58:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.861 04:58:01 -- common/autotest_common.sh@10 -- # set +x 00:17:38.119 04:58:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:38.119 04:58:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:38.119 04:58:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:38.378 [2024-11-18 04:58:01.716261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:38.378 [2024-11-18 04:58:01.716365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.378 [2024-11-18 04:58:01.716396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:17:38.378 [2024-11-18 04:58:01.716413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.378 [2024-11-18 04:58:01.717003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.378 [2024-11-18 04:58:01.717040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:38.378 [2024-11-18 04:58:01.717139] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:38.378 [2024-11-18 04:58:01.717174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:38.378 pt2 00:17:38.378 04:58:01 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:38.378 04:58:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:38.378 04:58:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:38.636 [2024-11-18 04:58:01.972336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:38.636 [2024-11-18 04:58:01.972421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.636 [2024-11-18 04:58:01.972450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:17:38.636 [2024-11-18 04:58:01.972467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.636 [2024-11-18 04:58:01.972894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.636 [2024-11-18 04:58:01.972921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:38.636 [2024-11-18 04:58:01.973008] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:38.636 [2024-11-18 04:58:01.973045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:38.636 pt3 00:17:38.636 04:58:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:38.636 04:58:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:38.636 04:58:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:38.895 [2024-11-18 04:58:02.168343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:38.895 [2024-11-18 04:58:02.168413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.895 [2024-11-18 04:58:02.168439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:38.895 [2024-11-18 04:58:02.168454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.895 [2024-11-18 04:58:02.168829] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.895 [2024-11-18 04:58:02.168856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:38.895 [2024-11-18 04:58:02.168934] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:38.895 [2024-11-18 04:58:02.168964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:38.895 [2024-11-18 04:58:02.169089] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:17:38.895 [2024-11-18 04:58:02.169108] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:38.895 [2024-11-18 04:58:02.169214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:38.895 [2024-11-18 04:58:02.169633] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:17:38.895 [2024-11-18 04:58:02.169648] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:17:38.895 [2024-11-18 04:58:02.169823] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.895 pt4 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.895 04:58:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.154 04:58:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.154 "name": "raid_bdev1", 00:17:39.154 "uuid": "f6cb7a40-aa39-4316-aae8-31b4b27f8f0a", 00:17:39.154 "strip_size_kb": 64, 00:17:39.154 "state": "online", 00:17:39.154 "raid_level": "raid0", 00:17:39.154 "superblock": true, 00:17:39.154 "num_base_bdevs": 4, 00:17:39.154 "num_base_bdevs_discovered": 4, 00:17:39.154 "num_base_bdevs_operational": 4, 00:17:39.154 "base_bdevs_list": [ 00:17:39.154 { 00:17:39.154 "name": "pt1", 00:17:39.154 "uuid": "e06afd51-3d08-5ab8-9485-c2b4458006a6", 00:17:39.154 "is_configured": true, 00:17:39.154 "data_offset": 2048, 00:17:39.154 "data_size": 63488 00:17:39.154 }, 00:17:39.154 { 00:17:39.154 "name": "pt2", 00:17:39.154 "uuid": "000ea3e4-a524-5b24-b521-c92753f7130d", 00:17:39.154 "is_configured": true, 00:17:39.154 "data_offset": 2048, 00:17:39.154 "data_size": 63488 00:17:39.154 }, 00:17:39.154 { 00:17:39.154 "name": "pt3", 00:17:39.154 "uuid": "d5e19606-a5d2-5292-a2a0-ed1cedc42642", 00:17:39.154 "is_configured": true, 00:17:39.154 "data_offset": 2048, 00:17:39.154 "data_size": 63488 00:17:39.154 }, 00:17:39.154 { 00:17:39.154 "name": "pt4", 00:17:39.154 "uuid": "6ec9b97a-00a8-59c7-9732-8826de2b7e25", 00:17:39.154 "is_configured": true, 00:17:39.154 "data_offset": 2048, 00:17:39.154 "data_size": 63488 00:17:39.154 } 00:17:39.154 ] 00:17:39.154 }' 00:17:39.154 04:58:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.154 04:58:02 -- common/autotest_common.sh@10 -- # set +x 00:17:39.412 04:58:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.412 04:58:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:39.671 [2024-11-18 04:58:02.952914] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.671 04:58:02 -- bdev/bdev_raid.sh@430 -- # '[' f6cb7a40-aa39-4316-aae8-31b4b27f8f0a '!=' f6cb7a40-aa39-4316-aae8-31b4b27f8f0a ']' 00:17:39.671 04:58:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:39.671 04:58:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:39.671 04:58:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:39.671 04:58:02 -- bdev/bdev_raid.sh@511 -- # killprocess 75395 00:17:39.671 04:58:02 -- common/autotest_common.sh@936 -- # '[' -z 75395 ']' 00:17:39.671 04:58:02 -- common/autotest_common.sh@940 -- # kill -0 75395 00:17:39.671 04:58:02 -- common/autotest_common.sh@941 -- # uname 00:17:39.671 04:58:02 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.671 04:58:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75395 00:17:39.671 killing process with pid 75395 00:17:39.671 04:58:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:39.671 04:58:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:39.671 04:58:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75395' 00:17:39.671 04:58:03 -- common/autotest_common.sh@955 -- # kill 75395 00:17:39.671 [2024-11-18 04:58:03.006961] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.671 04:58:03 -- common/autotest_common.sh@960 -- # wait 75395 00:17:39.671 [2024-11-18 04:58:03.007084] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.671 [2024-11-18 04:58:03.007180] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.671 [2024-11-18 04:58:03.007195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:17:39.930 [2024-11-18 04:58:03.292963] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.866 04:58:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:40.866 00:17:40.866 real 0m10.319s 00:17:40.866 user 0m16.985s 00:17:40.866 sys 0m1.495s 00:17:40.866 ************************************ 00:17:40.866 END TEST raid_superblock_test 00:17:40.866 ************************************ 00:17:40.866 04:58:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:40.866 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:17:40.866 04:58:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:40.866 04:58:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:40.866 04:58:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:40.866 04:58:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.866 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:17:41.125 ************************************ 00:17:41.125 START TEST raid_state_function_test 00:17:41.125 ************************************ 00:17:41.125 04:58:04 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:41.125 
04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:41.125 Process raid pid: 75686 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=75686 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 75686' 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 75686 /var/tmp/spdk-raid.sock 00:17:41.125 04:58:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:41.125 04:58:04 -- common/autotest_common.sh@829 -- # '[' -z 75686 ']' 00:17:41.125 04:58:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:41.125 04:58:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.125 04:58:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:41.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:41.125 04:58:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.125 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:17:41.125 [2024-11-18 04:58:04.465863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
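Everything from here on runs against a dedicated SPDK app: the harness launches bdev_svc on a private RPC socket, waits for it to listen, and then issues every rpc.py call with -s /var/tmp/spdk-raid.sock. A minimal sketch of that setup outside the harness (the polling loop stands in for waitforlisten, which additionally tracks the pid; rpc_get_methods is a core SPDK RPC):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Start the bdev service app with bdev_raid debug logging, as the test does.
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    svc_pid=$!

    # Poll until the UNIX socket answers RPCs; waitforlisten does the equivalent.
    until "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # Every subsequent call targets the private socket, e.g. the malloc create below:
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev1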
00:17:41.125 [2024-11-18 04:58:04.466387] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.125 [2024-11-18 04:58:04.632119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.384 [2024-11-18 04:58:04.801626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.642 [2024-11-18 04:58:04.965602] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.901 04:58:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.901 04:58:05 -- common/autotest_common.sh@862 -- # return 0 00:17:41.901 04:58:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.160 [2024-11-18 04:58:05.592960] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.160 [2024-11-18 04:58:05.593210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.160 [2024-11-18 04:58:05.593238] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.160 [2024-11-18 04:58:05.593257] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.160 [2024-11-18 04:58:05.593267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.160 [2024-11-18 04:58:05.593280] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.160 [2024-11-18 04:58:05.593289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.160 [2024-11-18 04:58:05.593302] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.160 04:58:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.420 04:58:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.420 "name": "Existed_Raid", 00:17:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.420 "strip_size_kb": 64, 00:17:42.420 "state": "configuring", 00:17:42.420 "raid_level": "concat", 00:17:42.420 "superblock": false, 00:17:42.420 "num_base_bdevs": 4, 00:17:42.420 "num_base_bdevs_discovered": 0, 00:17:42.420 "num_base_bdevs_operational": 4, 00:17:42.420 "base_bdevs_list": [ 00:17:42.420 { 00:17:42.420 
"name": "BaseBdev1", 00:17:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.420 "is_configured": false, 00:17:42.420 "data_offset": 0, 00:17:42.420 "data_size": 0 00:17:42.420 }, 00:17:42.420 { 00:17:42.420 "name": "BaseBdev2", 00:17:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.420 "is_configured": false, 00:17:42.420 "data_offset": 0, 00:17:42.420 "data_size": 0 00:17:42.420 }, 00:17:42.420 { 00:17:42.420 "name": "BaseBdev3", 00:17:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.420 "is_configured": false, 00:17:42.420 "data_offset": 0, 00:17:42.420 "data_size": 0 00:17:42.420 }, 00:17:42.420 { 00:17:42.420 "name": "BaseBdev4", 00:17:42.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.420 "is_configured": false, 00:17:42.420 "data_offset": 0, 00:17:42.420 "data_size": 0 00:17:42.420 } 00:17:42.420 ] 00:17:42.420 }' 00:17:42.420 04:58:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.420 04:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:42.678 04:58:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.937 [2024-11-18 04:58:06.277064] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.937 [2024-11-18 04:58:06.277125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:42.937 04:58:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:43.196 [2024-11-18 04:58:06.533192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.196 [2024-11-18 04:58:06.533316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.196 [2024-11-18 04:58:06.533333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.196 [2024-11-18 04:58:06.533351] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.196 [2024-11-18 04:58:06.533361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:43.196 [2024-11-18 04:58:06.533376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.196 [2024-11-18 04:58:06.533385] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:43.196 [2024-11-18 04:58:06.533399] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.196 04:58:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.454 [2024-11-18 04:58:06.809567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.454 BaseBdev1 00:17:43.454 04:58:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:43.454 04:58:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:43.454 04:58:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:43.454 04:58:06 -- common/autotest_common.sh@899 -- # local i 00:17:43.454 04:58:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.454 04:58:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.454 04:58:06 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.713 04:58:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.970 [ 00:17:43.970 { 00:17:43.970 "name": "BaseBdev1", 00:17:43.970 "aliases": [ 00:17:43.970 "c42f2e11-3014-4d48-a9df-f84fb78e22dd" 00:17:43.970 ], 00:17:43.970 "product_name": "Malloc disk", 00:17:43.970 "block_size": 512, 00:17:43.970 "num_blocks": 65536, 00:17:43.970 "uuid": "c42f2e11-3014-4d48-a9df-f84fb78e22dd", 00:17:43.970 "assigned_rate_limits": { 00:17:43.971 "rw_ios_per_sec": 0, 00:17:43.971 "rw_mbytes_per_sec": 0, 00:17:43.971 "r_mbytes_per_sec": 0, 00:17:43.971 "w_mbytes_per_sec": 0 00:17:43.971 }, 00:17:43.971 "claimed": true, 00:17:43.971 "claim_type": "exclusive_write", 00:17:43.971 "zoned": false, 00:17:43.971 "supported_io_types": { 00:17:43.971 "read": true, 00:17:43.971 "write": true, 00:17:43.971 "unmap": true, 00:17:43.971 "write_zeroes": true, 00:17:43.971 "flush": true, 00:17:43.971 "reset": true, 00:17:43.971 "compare": false, 00:17:43.971 "compare_and_write": false, 00:17:43.971 "abort": true, 00:17:43.971 "nvme_admin": false, 00:17:43.971 "nvme_io": false 00:17:43.971 }, 00:17:43.971 "memory_domains": [ 00:17:43.971 { 00:17:43.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.971 "dma_device_type": 2 00:17:43.971 } 00:17:43.971 ], 00:17:43.971 "driver_specific": {} 00:17:43.971 } 00:17:43.971 ] 00:17:43.971 04:58:07 -- common/autotest_common.sh@905 -- # return 0 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.971 "name": "Existed_Raid", 00:17:43.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.971 "strip_size_kb": 64, 00:17:43.971 "state": "configuring", 00:17:43.971 "raid_level": "concat", 00:17:43.971 "superblock": false, 00:17:43.971 "num_base_bdevs": 4, 00:17:43.971 "num_base_bdevs_discovered": 1, 00:17:43.971 "num_base_bdevs_operational": 4, 00:17:43.971 "base_bdevs_list": [ 00:17:43.971 { 00:17:43.971 "name": "BaseBdev1", 00:17:43.971 "uuid": "c42f2e11-3014-4d48-a9df-f84fb78e22dd", 00:17:43.971 "is_configured": true, 00:17:43.971 "data_offset": 0, 00:17:43.971 "data_size": 65536 00:17:43.971 }, 00:17:43.971 { 00:17:43.971 "name": "BaseBdev2", 00:17:43.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.971 "is_configured": false, 00:17:43.971 "data_offset": 0, 00:17:43.971 "data_size": 0 00:17:43.971 }, 
00:17:43.971 { 00:17:43.971 "name": "BaseBdev3", 00:17:43.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.971 "is_configured": false, 00:17:43.971 "data_offset": 0, 00:17:43.971 "data_size": 0 00:17:43.971 }, 00:17:43.971 { 00:17:43.971 "name": "BaseBdev4", 00:17:43.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.971 "is_configured": false, 00:17:43.971 "data_offset": 0, 00:17:43.971 "data_size": 0 00:17:43.971 } 00:17:43.971 ] 00:17:43.971 }' 00:17:43.971 04:58:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.971 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:17:44.536 04:58:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.536 [2024-11-18 04:58:08.033947] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.536 [2024-11-18 04:58:08.034001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:44.536 04:58:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:44.536 04:58:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:44.795 [2024-11-18 04:58:08.242076] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.795 [2024-11-18 04:58:08.244357] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.795 [2024-11-18 04:58:08.244425] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.795 [2024-11-18 04:58:08.244441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.795 [2024-11-18 04:58:08.244456] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.795 [2024-11-18 04:58:08.244466] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.795 [2024-11-18 04:58:08.244480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.795 04:58:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.053 04:58:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.053 "name": "Existed_Raid", 00:17:45.053 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.053 "strip_size_kb": 64, 00:17:45.053 "state": "configuring", 00:17:45.053 "raid_level": "concat", 00:17:45.053 "superblock": false, 00:17:45.053 "num_base_bdevs": 4, 00:17:45.053 "num_base_bdevs_discovered": 1, 00:17:45.053 "num_base_bdevs_operational": 4, 00:17:45.053 "base_bdevs_list": [ 00:17:45.053 { 00:17:45.053 "name": "BaseBdev1", 00:17:45.053 "uuid": "c42f2e11-3014-4d48-a9df-f84fb78e22dd", 00:17:45.053 "is_configured": true, 00:17:45.053 "data_offset": 0, 00:17:45.053 "data_size": 65536 00:17:45.053 }, 00:17:45.053 { 00:17:45.053 "name": "BaseBdev2", 00:17:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.053 "is_configured": false, 00:17:45.053 "data_offset": 0, 00:17:45.053 "data_size": 0 00:17:45.053 }, 00:17:45.053 { 00:17:45.053 "name": "BaseBdev3", 00:17:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.053 "is_configured": false, 00:17:45.053 "data_offset": 0, 00:17:45.053 "data_size": 0 00:17:45.053 }, 00:17:45.053 { 00:17:45.053 "name": "BaseBdev4", 00:17:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.053 "is_configured": false, 00:17:45.053 "data_offset": 0, 00:17:45.053 "data_size": 0 00:17:45.053 } 00:17:45.053 ] 00:17:45.053 }' 00:17:45.053 04:58:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.053 04:58:08 -- common/autotest_common.sh@10 -- # set +x 00:17:45.312 04:58:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.570 [2024-11-18 04:58:09.065591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.570 BaseBdev2 00:17:45.570 04:58:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:45.570 04:58:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:45.570 04:58:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.570 04:58:09 -- common/autotest_common.sh@899 -- # local i 00:17:45.570 04:58:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.570 04:58:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.570 04:58:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.828 04:58:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.087 [ 00:17:46.087 { 00:17:46.087 "name": "BaseBdev2", 00:17:46.087 "aliases": [ 00:17:46.087 "f04218ef-fd2d-490f-b2ed-8c19a1955d32" 00:17:46.087 ], 00:17:46.087 "product_name": "Malloc disk", 00:17:46.087 "block_size": 512, 00:17:46.087 "num_blocks": 65536, 00:17:46.087 "uuid": "f04218ef-fd2d-490f-b2ed-8c19a1955d32", 00:17:46.087 "assigned_rate_limits": { 00:17:46.087 "rw_ios_per_sec": 0, 00:17:46.087 "rw_mbytes_per_sec": 0, 00:17:46.087 "r_mbytes_per_sec": 0, 00:17:46.087 "w_mbytes_per_sec": 0 00:17:46.087 }, 00:17:46.087 "claimed": true, 00:17:46.087 "claim_type": "exclusive_write", 00:17:46.087 "zoned": false, 00:17:46.087 "supported_io_types": { 00:17:46.087 "read": true, 00:17:46.087 "write": true, 00:17:46.087 "unmap": true, 00:17:46.087 "write_zeroes": true, 00:17:46.087 "flush": true, 00:17:46.087 "reset": true, 00:17:46.087 "compare": false, 00:17:46.087 "compare_and_write": false, 00:17:46.087 "abort": true, 00:17:46.087 "nvme_admin": false, 00:17:46.087 "nvme_io": false 00:17:46.087 }, 00:17:46.087 "memory_domains": [ 
00:17:46.087 { 00:17:46.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.087 "dma_device_type": 2 00:17:46.087 } 00:17:46.087 ], 00:17:46.087 "driver_specific": {} 00:17:46.087 } 00:17:46.087 ] 00:17:46.087 04:58:09 -- common/autotest_common.sh@905 -- # return 0 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.087 04:58:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.345 04:58:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.345 "name": "Existed_Raid", 00:17:46.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.345 "strip_size_kb": 64, 00:17:46.345 "state": "configuring", 00:17:46.345 "raid_level": "concat", 00:17:46.345 "superblock": false, 00:17:46.345 "num_base_bdevs": 4, 00:17:46.345 "num_base_bdevs_discovered": 2, 00:17:46.345 "num_base_bdevs_operational": 4, 00:17:46.345 "base_bdevs_list": [ 00:17:46.345 { 00:17:46.345 "name": "BaseBdev1", 00:17:46.345 "uuid": "c42f2e11-3014-4d48-a9df-f84fb78e22dd", 00:17:46.345 "is_configured": true, 00:17:46.345 "data_offset": 0, 00:17:46.345 "data_size": 65536 00:17:46.345 }, 00:17:46.345 { 00:17:46.345 "name": "BaseBdev2", 00:17:46.345 "uuid": "f04218ef-fd2d-490f-b2ed-8c19a1955d32", 00:17:46.345 "is_configured": true, 00:17:46.345 "data_offset": 0, 00:17:46.345 "data_size": 65536 00:17:46.345 }, 00:17:46.345 { 00:17:46.346 "name": "BaseBdev3", 00:17:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.346 "is_configured": false, 00:17:46.346 "data_offset": 0, 00:17:46.346 "data_size": 0 00:17:46.346 }, 00:17:46.346 { 00:17:46.346 "name": "BaseBdev4", 00:17:46.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.346 "is_configured": false, 00:17:46.346 "data_offset": 0, 00:17:46.346 "data_size": 0 00:17:46.346 } 00:17:46.346 ] 00:17:46.346 }' 00:17:46.346 04:58:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.346 04:58:09 -- common/autotest_common.sh@10 -- # set +x 00:17:46.604 04:58:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:46.864 [2024-11-18 04:58:10.376044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.864 BaseBdev3 00:17:47.124 04:58:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:47.124 04:58:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:47.124 04:58:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:47.124 
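Each waitforbdev in this stretch reduces to a single RPC: bdev_get_bdevs with a -t timeout in milliseconds, which waits for the named bdev to register instead of failing immediately. A sketch of the create-then-wait pattern (the rpc wrapper is a local convenience, not part of the harness):

    # Convenience wrapper around the private-socket rpc.py invocation.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    rpc bdev_malloc_create 32 512 -b BaseBdev3          # 32 MiB bdev, 512-byte blocks
    rpc bdev_get_bdevs -b BaseBdev3 -t 2000 >/dev/null  # wait up to 2 s for it to appear

The descriptor dump printed just below is the output of that same bdev_get_bdevs -b BaseBdev3 -t 2000 call once the bdev exists.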
04:58:10 -- common/autotest_common.sh@899 -- # local i 00:17:47.124 04:58:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:47.124 04:58:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:47.124 04:58:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.383 04:58:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.383 [ 00:17:47.383 { 00:17:47.383 "name": "BaseBdev3", 00:17:47.383 "aliases": [ 00:17:47.383 "4394d582-973b-4244-976b-bc8d0188c7cb" 00:17:47.383 ], 00:17:47.383 "product_name": "Malloc disk", 00:17:47.383 "block_size": 512, 00:17:47.383 "num_blocks": 65536, 00:17:47.383 "uuid": "4394d582-973b-4244-976b-bc8d0188c7cb", 00:17:47.383 "assigned_rate_limits": { 00:17:47.383 "rw_ios_per_sec": 0, 00:17:47.383 "rw_mbytes_per_sec": 0, 00:17:47.383 "r_mbytes_per_sec": 0, 00:17:47.383 "w_mbytes_per_sec": 0 00:17:47.383 }, 00:17:47.383 "claimed": true, 00:17:47.383 "claim_type": "exclusive_write", 00:17:47.383 "zoned": false, 00:17:47.383 "supported_io_types": { 00:17:47.383 "read": true, 00:17:47.383 "write": true, 00:17:47.383 "unmap": true, 00:17:47.383 "write_zeroes": true, 00:17:47.383 "flush": true, 00:17:47.383 "reset": true, 00:17:47.383 "compare": false, 00:17:47.383 "compare_and_write": false, 00:17:47.383 "abort": true, 00:17:47.383 "nvme_admin": false, 00:17:47.383 "nvme_io": false 00:17:47.383 }, 00:17:47.383 "memory_domains": [ 00:17:47.383 { 00:17:47.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.383 "dma_device_type": 2 00:17:47.383 } 00:17:47.383 ], 00:17:47.383 "driver_specific": {} 00:17:47.383 } 00:17:47.383 ] 00:17:47.383 04:58:10 -- common/autotest_common.sh@905 -- # return 0 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.383 04:58:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.642 04:58:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.642 "name": "Existed_Raid", 00:17:47.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.642 "strip_size_kb": 64, 00:17:47.642 "state": "configuring", 00:17:47.642 "raid_level": "concat", 00:17:47.642 "superblock": false, 00:17:47.642 "num_base_bdevs": 4, 00:17:47.642 "num_base_bdevs_discovered": 3, 00:17:47.642 "num_base_bdevs_operational": 4, 00:17:47.642 "base_bdevs_list": [ 00:17:47.642 { 00:17:47.642 "name": 
"BaseBdev1", 00:17:47.642 "uuid": "c42f2e11-3014-4d48-a9df-f84fb78e22dd", 00:17:47.642 "is_configured": true, 00:17:47.642 "data_offset": 0, 00:17:47.642 "data_size": 65536 00:17:47.642 }, 00:17:47.642 { 00:17:47.642 "name": "BaseBdev2", 00:17:47.642 "uuid": "f04218ef-fd2d-490f-b2ed-8c19a1955d32", 00:17:47.642 "is_configured": true, 00:17:47.642 "data_offset": 0, 00:17:47.642 "data_size": 65536 00:17:47.642 }, 00:17:47.642 { 00:17:47.642 "name": "BaseBdev3", 00:17:47.642 "uuid": "4394d582-973b-4244-976b-bc8d0188c7cb", 00:17:47.642 "is_configured": true, 00:17:47.642 "data_offset": 0, 00:17:47.642 "data_size": 65536 00:17:47.642 }, 00:17:47.642 { 00:17:47.642 "name": "BaseBdev4", 00:17:47.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.642 "is_configured": false, 00:17:47.642 "data_offset": 0, 00:17:47.642 "data_size": 0 00:17:47.642 } 00:17:47.642 ] 00:17:47.642 }' 00:17:47.642 04:58:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.642 04:58:11 -- common/autotest_common.sh@10 -- # set +x 00:17:47.901 04:58:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:48.160 [2024-11-18 04:58:11.635705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.160 [2024-11-18 04:58:11.636007] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:17:48.160 [2024-11-18 04:58:11.636040] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:48.160 [2024-11-18 04:58:11.636186] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:48.160 [2024-11-18 04:58:11.636664] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:17:48.160 [2024-11-18 04:58:11.636685] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:17:48.160 [2024-11-18 04:58:11.636999] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.160 BaseBdev4 00:17:48.160 04:58:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:48.160 04:58:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:48.160 04:58:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:48.160 04:58:11 -- common/autotest_common.sh@899 -- # local i 00:17:48.160 04:58:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:48.160 04:58:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:48.160 04:58:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:48.419 04:58:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:48.696 [ 00:17:48.696 { 00:17:48.696 "name": "BaseBdev4", 00:17:48.696 "aliases": [ 00:17:48.696 "2a2fe60b-66a3-43d9-b983-38ae7d5a7802" 00:17:48.696 ], 00:17:48.696 "product_name": "Malloc disk", 00:17:48.696 "block_size": 512, 00:17:48.696 "num_blocks": 65536, 00:17:48.696 "uuid": "2a2fe60b-66a3-43d9-b983-38ae7d5a7802", 00:17:48.696 "assigned_rate_limits": { 00:17:48.696 "rw_ios_per_sec": 0, 00:17:48.696 "rw_mbytes_per_sec": 0, 00:17:48.696 "r_mbytes_per_sec": 0, 00:17:48.696 "w_mbytes_per_sec": 0 00:17:48.696 }, 00:17:48.696 "claimed": true, 00:17:48.696 "claim_type": "exclusive_write", 00:17:48.696 "zoned": false, 00:17:48.696 
"supported_io_types": { 00:17:48.696 "read": true, 00:17:48.696 "write": true, 00:17:48.696 "unmap": true, 00:17:48.696 "write_zeroes": true, 00:17:48.696 "flush": true, 00:17:48.696 "reset": true, 00:17:48.696 "compare": false, 00:17:48.696 "compare_and_write": false, 00:17:48.696 "abort": true, 00:17:48.696 "nvme_admin": false, 00:17:48.696 "nvme_io": false 00:17:48.696 }, 00:17:48.696 "memory_domains": [ 00:17:48.696 { 00:17:48.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.696 "dma_device_type": 2 00:17:48.696 } 00:17:48.696 ], 00:17:48.696 "driver_specific": {} 00:17:48.696 } 00:17:48.696 ] 00:17:48.696 04:58:12 -- common/autotest_common.sh@905 -- # return 0 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.696 04:58:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.953 04:58:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.953 "name": "Existed_Raid", 00:17:48.953 "uuid": "df1e40bf-3eba-4918-9b4e-a50c2be61bc0", 00:17:48.954 "strip_size_kb": 64, 00:17:48.954 "state": "online", 00:17:48.954 "raid_level": "concat", 00:17:48.954 "superblock": false, 00:17:48.954 "num_base_bdevs": 4, 00:17:48.954 "num_base_bdevs_discovered": 4, 00:17:48.954 "num_base_bdevs_operational": 4, 00:17:48.954 "base_bdevs_list": [ 00:17:48.954 { 00:17:48.954 "name": "BaseBdev1", 00:17:48.954 "uuid": "c42f2e11-3014-4d48-a9df-f84fb78e22dd", 00:17:48.954 "is_configured": true, 00:17:48.954 "data_offset": 0, 00:17:48.954 "data_size": 65536 00:17:48.954 }, 00:17:48.954 { 00:17:48.954 "name": "BaseBdev2", 00:17:48.954 "uuid": "f04218ef-fd2d-490f-b2ed-8c19a1955d32", 00:17:48.954 "is_configured": true, 00:17:48.954 "data_offset": 0, 00:17:48.954 "data_size": 65536 00:17:48.954 }, 00:17:48.954 { 00:17:48.954 "name": "BaseBdev3", 00:17:48.954 "uuid": "4394d582-973b-4244-976b-bc8d0188c7cb", 00:17:48.954 "is_configured": true, 00:17:48.954 "data_offset": 0, 00:17:48.954 "data_size": 65536 00:17:48.954 }, 00:17:48.954 { 00:17:48.954 "name": "BaseBdev4", 00:17:48.954 "uuid": "2a2fe60b-66a3-43d9-b983-38ae7d5a7802", 00:17:48.954 "is_configured": true, 00:17:48.954 "data_offset": 0, 00:17:48.954 "data_size": 65536 00:17:48.954 } 00:17:48.954 ] 00:17:48.954 }' 00:17:48.954 04:58:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.954 04:58:12 -- common/autotest_common.sh@10 -- # set +x 00:17:49.212 04:58:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:49.471 [2024-11-18 04:58:12.824128] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.471 [2024-11-18 04:58:12.824410] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.471 [2024-11-18 04:58:12.824611] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.471 04:58:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.729 04:58:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.729 "name": "Existed_Raid", 00:17:49.729 "uuid": "df1e40bf-3eba-4918-9b4e-a50c2be61bc0", 00:17:49.729 "strip_size_kb": 64, 00:17:49.729 "state": "offline", 00:17:49.729 "raid_level": "concat", 00:17:49.729 "superblock": false, 00:17:49.729 "num_base_bdevs": 4, 00:17:49.729 "num_base_bdevs_discovered": 3, 00:17:49.729 "num_base_bdevs_operational": 3, 00:17:49.729 "base_bdevs_list": [ 00:17:49.729 { 00:17:49.729 "name": null, 00:17:49.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.729 "is_configured": false, 00:17:49.729 "data_offset": 0, 00:17:49.729 "data_size": 65536 00:17:49.729 }, 00:17:49.729 { 00:17:49.729 "name": "BaseBdev2", 00:17:49.729 "uuid": "f04218ef-fd2d-490f-b2ed-8c19a1955d32", 00:17:49.729 "is_configured": true, 00:17:49.729 "data_offset": 0, 00:17:49.729 "data_size": 65536 00:17:49.729 }, 00:17:49.729 { 00:17:49.729 "name": "BaseBdev3", 00:17:49.729 "uuid": "4394d582-973b-4244-976b-bc8d0188c7cb", 00:17:49.729 "is_configured": true, 00:17:49.729 "data_offset": 0, 00:17:49.729 "data_size": 65536 00:17:49.729 }, 00:17:49.729 { 00:17:49.729 "name": "BaseBdev4", 00:17:49.729 "uuid": "2a2fe60b-66a3-43d9-b983-38ae7d5a7802", 00:17:49.729 "is_configured": true, 00:17:49.729 "data_offset": 0, 00:17:49.729 "data_size": 65536 00:17:49.729 } 00:17:49.729 ] 00:17:49.729 }' 00:17:49.729 04:58:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.729 04:58:13 -- common/autotest_common.sh@10 -- # set +x 00:17:49.988 04:58:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.988 04:58:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.988 04:58:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:49.988 04:58:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.246 04:58:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.246 04:58:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.246 04:58:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.505 [2024-11-18 04:58:13.895474] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.505 04:58:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.505 04:58:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.505 04:58:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.505 04:58:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.763 04:58:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.763 04:58:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.763 04:58:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:51.022 [2024-11-18 04:58:14.450405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.281 04:58:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:51.539 [2024-11-18 04:58:15.027360] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:51.539 [2024-11-18 04:58:15.027641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:17:51.798 04:58:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:51.798 04:58:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:51.798 04:58:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.798 04:58:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:52.057 04:58:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:52.057 04:58:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:52.057 04:58:15 -- bdev/bdev_raid.sh@287 -- # killprocess 75686 00:17:52.057 04:58:15 -- common/autotest_common.sh@936 -- # '[' -z 75686 ']' 00:17:52.057 04:58:15 -- common/autotest_common.sh@940 -- # kill -0 75686 00:17:52.057 04:58:15 -- common/autotest_common.sh@941 -- # uname 00:17:52.057 04:58:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:52.057 04:58:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75686 00:17:52.057 killing process with pid 75686 00:17:52.057 04:58:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:52.057 04:58:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:52.057 04:58:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75686' 00:17:52.057 04:58:15 -- common/autotest_common.sh@955 -- # 
kill 75686 00:17:52.057 [2024-11-18 04:58:15.355635] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.057 04:58:15 -- common/autotest_common.sh@960 -- # wait 75686 00:17:52.057 [2024-11-18 04:58:15.355742] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.993 ************************************ 00:17:52.993 END TEST raid_state_function_test 00:17:52.993 ************************************ 00:17:52.993 04:58:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:52.993 00:17:52.993 real 0m12.005s 00:17:52.993 user 0m20.149s 00:17:52.993 sys 0m1.786s 00:17:52.993 04:58:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:52.993 04:58:16 -- common/autotest_common.sh@10 -- # set +x 00:17:52.993 04:58:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:52.993 04:58:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:52.993 04:58:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.994 04:58:16 -- common/autotest_common.sh@10 -- # set +x 00:17:52.994 ************************************ 00:17:52.994 START TEST raid_state_function_test_sb 00:17:52.994 ************************************ 00:17:52.994 04:58:16 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=76080 00:17:52.994 
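The _sb variant starting here reruns the same state machine with superblock=true: superblock_create_arg was set to -s above, so every bdev_raid_create in the new process carries it, as its first create below shows. With a superblock, each base bdev reserves space for on-disk metadata, which is why the superblock arrays earlier in this log report data_offset 2048 and data_size 63488 rather than 0 and 65536 on the same 65536-block malloc bdevs. The create, taken from this test's own invocation:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid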
Process raid pid: 76080 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76080' 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76080 /var/tmp/spdk-raid.sock 00:17:52.994 04:58:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:52.994 04:58:16 -- common/autotest_common.sh@829 -- # '[' -z 76080 ']' 00:17:52.994 04:58:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:52.994 04:58:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.994 04:58:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:52.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:52.994 04:58:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.994 04:58:16 -- common/autotest_common.sh@10 -- # set +x 00:17:53.253 [2024-11-18 04:58:16.527594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:53.253 [2024-11-18 04:58:16.527954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.253 [2024-11-18 04:58:16.701428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.511 [2024-11-18 04:58:16.882396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.770 [2024-11-18 04:58:17.056817] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.030 04:58:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.030 04:58:17 -- common/autotest_common.sh@862 -- # return 0 00:17:54.030 04:58:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:54.290 [2024-11-18 04:58:17.611116] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.290 [2024-11-18 04:58:17.611393] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.290 [2024-11-18 04:58:17.611422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.290 [2024-11-18 04:58:17.611442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.290 [2024-11-18 04:58:17.611453] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.290 [2024-11-18 04:58:17.611466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.290 [2024-11-18 04:58:17.611476] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:54.290 [2024-11-18 04:58:17.611489] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@120 -- 
# local strip_size=64 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.290 04:58:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.548 04:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.548 "name": "Existed_Raid", 00:17:54.548 "uuid": "4b47a14a-b82e-476f-b452-82864148a5b5", 00:17:54.548 "strip_size_kb": 64, 00:17:54.548 "state": "configuring", 00:17:54.548 "raid_level": "concat", 00:17:54.548 "superblock": true, 00:17:54.548 "num_base_bdevs": 4, 00:17:54.548 "num_base_bdevs_discovered": 0, 00:17:54.548 "num_base_bdevs_operational": 4, 00:17:54.548 "base_bdevs_list": [ 00:17:54.548 { 00:17:54.548 "name": "BaseBdev1", 00:17:54.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.548 "is_configured": false, 00:17:54.548 "data_offset": 0, 00:17:54.548 "data_size": 0 00:17:54.548 }, 00:17:54.548 { 00:17:54.548 "name": "BaseBdev2", 00:17:54.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.548 "is_configured": false, 00:17:54.548 "data_offset": 0, 00:17:54.548 "data_size": 0 00:17:54.548 }, 00:17:54.548 { 00:17:54.548 "name": "BaseBdev3", 00:17:54.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.548 "is_configured": false, 00:17:54.548 "data_offset": 0, 00:17:54.548 "data_size": 0 00:17:54.548 }, 00:17:54.548 { 00:17:54.548 "name": "BaseBdev4", 00:17:54.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.548 "is_configured": false, 00:17:54.548 "data_offset": 0, 00:17:54.548 "data_size": 0 00:17:54.548 } 00:17:54.548 ] 00:17:54.548 }' 00:17:54.548 04:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.548 04:58:17 -- common/autotest_common.sh@10 -- # set +x 00:17:54.806 04:58:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.065 [2024-11-18 04:58:18.439172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.065 [2024-11-18 04:58:18.439276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:55.065 04:58:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:55.323 [2024-11-18 04:58:18.695355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.323 [2024-11-18 04:58:18.695610] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.323 [2024-11-18 04:58:18.695757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.323 [2024-11-18 04:58:18.695892] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.323 [2024-11-18 04:58:18.696032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.323 [2024-11-18 04:58:18.696065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.323 [2024-11-18 04:58:18.696077] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.323 [2024-11-18 04:58:18.696091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.323 04:58:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.581 [2024-11-18 04:58:18.932093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.581 BaseBdev1 00:17:55.581 04:58:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:55.581 04:58:18 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:55.581 04:58:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:55.581 04:58:18 -- common/autotest_common.sh@899 -- # local i 00:17:55.581 04:58:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:55.581 04:58:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:55.581 04:58:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.838 04:58:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.838 [ 00:17:55.838 { 00:17:55.838 "name": "BaseBdev1", 00:17:55.838 "aliases": [ 00:17:55.838 "939df505-fb2c-4a61-929f-e4b7daac94b6" 00:17:55.838 ], 00:17:55.838 "product_name": "Malloc disk", 00:17:55.838 "block_size": 512, 00:17:55.838 "num_blocks": 65536, 00:17:55.838 "uuid": "939df505-fb2c-4a61-929f-e4b7daac94b6", 00:17:55.838 "assigned_rate_limits": { 00:17:55.838 "rw_ios_per_sec": 0, 00:17:55.838 "rw_mbytes_per_sec": 0, 00:17:55.838 "r_mbytes_per_sec": 0, 00:17:55.838 "w_mbytes_per_sec": 0 00:17:55.838 }, 00:17:55.838 "claimed": true, 00:17:55.838 "claim_type": "exclusive_write", 00:17:55.838 "zoned": false, 00:17:55.838 "supported_io_types": { 00:17:55.838 "read": true, 00:17:55.838 "write": true, 00:17:55.838 "unmap": true, 00:17:55.838 "write_zeroes": true, 00:17:55.838 "flush": true, 00:17:55.838 "reset": true, 00:17:55.838 "compare": false, 00:17:55.838 "compare_and_write": false, 00:17:55.838 "abort": true, 00:17:55.838 "nvme_admin": false, 00:17:55.838 "nvme_io": false 00:17:55.838 }, 00:17:55.838 "memory_domains": [ 00:17:55.838 { 00:17:55.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.838 "dma_device_type": 2 00:17:55.838 } 00:17:55.838 ], 00:17:55.838 "driver_specific": {} 00:17:55.838 } 00:17:55.838 ] 00:17:55.838 04:58:19 -- common/autotest_common.sh@905 -- # return 0 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.838 04:58:19 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.838 04:58:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.096 04:58:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.096 "name": "Existed_Raid", 00:17:56.096 "uuid": "5cb643d0-bb0d-43d8-b367-6844d6817b43", 00:17:56.096 "strip_size_kb": 64, 00:17:56.096 "state": "configuring", 00:17:56.096 "raid_level": "concat", 00:17:56.096 "superblock": true, 00:17:56.096 "num_base_bdevs": 4, 00:17:56.096 "num_base_bdevs_discovered": 1, 00:17:56.096 "num_base_bdevs_operational": 4, 00:17:56.096 "base_bdevs_list": [ 00:17:56.096 { 00:17:56.096 "name": "BaseBdev1", 00:17:56.096 "uuid": "939df505-fb2c-4a61-929f-e4b7daac94b6", 00:17:56.096 "is_configured": true, 00:17:56.096 "data_offset": 2048, 00:17:56.096 "data_size": 63488 00:17:56.096 }, 00:17:56.096 { 00:17:56.096 "name": "BaseBdev2", 00:17:56.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.096 "is_configured": false, 00:17:56.096 "data_offset": 0, 00:17:56.096 "data_size": 0 00:17:56.096 }, 00:17:56.096 { 00:17:56.096 "name": "BaseBdev3", 00:17:56.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.096 "is_configured": false, 00:17:56.096 "data_offset": 0, 00:17:56.096 "data_size": 0 00:17:56.096 }, 00:17:56.096 { 00:17:56.096 "name": "BaseBdev4", 00:17:56.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.096 "is_configured": false, 00:17:56.096 "data_offset": 0, 00:17:56.096 "data_size": 0 00:17:56.096 } 00:17:56.096 ] 00:17:56.096 }' 00:17:56.096 04:58:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.096 04:58:19 -- common/autotest_common.sh@10 -- # set +x 00:17:56.662 04:58:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:56.662 [2024-11-18 04:58:20.132542] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.662 [2024-11-18 04:58:20.132634] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:56.662 04:58:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:56.662 04:58:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:56.920 04:58:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:57.179 BaseBdev1 00:17:57.179 04:58:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:57.179 04:58:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:57.179 04:58:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:57.179 04:58:20 -- common/autotest_common.sh@899 -- # local i 00:17:57.179 04:58:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:57.179 04:58:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:57.179 04:58:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.437 04:58:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:57.696 [ 00:17:57.696 { 00:17:57.696 "name": "BaseBdev1", 00:17:57.696 "aliases": [ 00:17:57.696 "8038e3ac-91a9-4e5b-ab0e-8b80b4d35fa2" 00:17:57.696 ], 00:17:57.696 "product_name": 
"Malloc disk", 00:17:57.696 "block_size": 512, 00:17:57.696 "num_blocks": 65536, 00:17:57.696 "uuid": "8038e3ac-91a9-4e5b-ab0e-8b80b4d35fa2", 00:17:57.696 "assigned_rate_limits": { 00:17:57.696 "rw_ios_per_sec": 0, 00:17:57.696 "rw_mbytes_per_sec": 0, 00:17:57.696 "r_mbytes_per_sec": 0, 00:17:57.696 "w_mbytes_per_sec": 0 00:17:57.696 }, 00:17:57.696 "claimed": false, 00:17:57.696 "zoned": false, 00:17:57.696 "supported_io_types": { 00:17:57.696 "read": true, 00:17:57.696 "write": true, 00:17:57.696 "unmap": true, 00:17:57.696 "write_zeroes": true, 00:17:57.696 "flush": true, 00:17:57.696 "reset": true, 00:17:57.696 "compare": false, 00:17:57.696 "compare_and_write": false, 00:17:57.696 "abort": true, 00:17:57.696 "nvme_admin": false, 00:17:57.696 "nvme_io": false 00:17:57.696 }, 00:17:57.696 "memory_domains": [ 00:17:57.696 { 00:17:57.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.696 "dma_device_type": 2 00:17:57.696 } 00:17:57.696 ], 00:17:57.696 "driver_specific": {} 00:17:57.696 } 00:17:57.696 ] 00:17:57.696 04:58:21 -- common/autotest_common.sh@905 -- # return 0 00:17:57.696 04:58:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:57.955 [2024-11-18 04:58:21.319107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.955 [2024-11-18 04:58:21.321193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.955 [2024-11-18 04:58:21.321256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.955 [2024-11-18 04:58:21.321289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.955 [2024-11-18 04:58:21.321305] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.955 [2024-11-18 04:58:21.321314] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:57.955 [2024-11-18 04:58:21.321330] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.955 04:58:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.214 04:58:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.214 "name": "Existed_Raid", 00:17:58.214 "uuid": 
"446b38be-e050-41e8-9c0b-9a56849aad88", 00:17:58.214 "strip_size_kb": 64, 00:17:58.214 "state": "configuring", 00:17:58.214 "raid_level": "concat", 00:17:58.214 "superblock": true, 00:17:58.214 "num_base_bdevs": 4, 00:17:58.214 "num_base_bdevs_discovered": 1, 00:17:58.214 "num_base_bdevs_operational": 4, 00:17:58.214 "base_bdevs_list": [ 00:17:58.214 { 00:17:58.214 "name": "BaseBdev1", 00:17:58.214 "uuid": "8038e3ac-91a9-4e5b-ab0e-8b80b4d35fa2", 00:17:58.214 "is_configured": true, 00:17:58.214 "data_offset": 2048, 00:17:58.214 "data_size": 63488 00:17:58.214 }, 00:17:58.214 { 00:17:58.214 "name": "BaseBdev2", 00:17:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.214 "is_configured": false, 00:17:58.214 "data_offset": 0, 00:17:58.214 "data_size": 0 00:17:58.214 }, 00:17:58.214 { 00:17:58.214 "name": "BaseBdev3", 00:17:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.214 "is_configured": false, 00:17:58.214 "data_offset": 0, 00:17:58.214 "data_size": 0 00:17:58.214 }, 00:17:58.214 { 00:17:58.214 "name": "BaseBdev4", 00:17:58.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.214 "is_configured": false, 00:17:58.214 "data_offset": 0, 00:17:58.214 "data_size": 0 00:17:58.214 } 00:17:58.214 ] 00:17:58.214 }' 00:17:58.214 04:58:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.214 04:58:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.473 04:58:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:58.732 [2024-11-18 04:58:22.137443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.732 BaseBdev2 00:17:58.732 04:58:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:58.732 04:58:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:58.732 04:58:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:58.732 04:58:22 -- common/autotest_common.sh@899 -- # local i 00:17:58.732 04:58:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:58.732 04:58:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:58.732 04:58:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.991 04:58:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:59.250 [ 00:17:59.250 { 00:17:59.250 "name": "BaseBdev2", 00:17:59.250 "aliases": [ 00:17:59.250 "45897934-c401-41f4-93b9-221f318ff183" 00:17:59.250 ], 00:17:59.250 "product_name": "Malloc disk", 00:17:59.250 "block_size": 512, 00:17:59.250 "num_blocks": 65536, 00:17:59.250 "uuid": "45897934-c401-41f4-93b9-221f318ff183", 00:17:59.250 "assigned_rate_limits": { 00:17:59.250 "rw_ios_per_sec": 0, 00:17:59.250 "rw_mbytes_per_sec": 0, 00:17:59.250 "r_mbytes_per_sec": 0, 00:17:59.250 "w_mbytes_per_sec": 0 00:17:59.250 }, 00:17:59.250 "claimed": true, 00:17:59.250 "claim_type": "exclusive_write", 00:17:59.250 "zoned": false, 00:17:59.250 "supported_io_types": { 00:17:59.250 "read": true, 00:17:59.250 "write": true, 00:17:59.250 "unmap": true, 00:17:59.250 "write_zeroes": true, 00:17:59.250 "flush": true, 00:17:59.250 "reset": true, 00:17:59.250 "compare": false, 00:17:59.250 "compare_and_write": false, 00:17:59.250 "abort": true, 00:17:59.250 "nvme_admin": false, 00:17:59.250 "nvme_io": false 00:17:59.250 }, 00:17:59.250 "memory_domains": [ 00:17:59.250 { 
00:17:59.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.250 "dma_device_type": 2 00:17:59.250 } 00:17:59.250 ], 00:17:59.250 "driver_specific": {} 00:17:59.250 } 00:17:59.250 ] 00:17:59.250 04:58:22 -- common/autotest_common.sh@905 -- # return 0 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.250 "name": "Existed_Raid", 00:17:59.250 "uuid": "446b38be-e050-41e8-9c0b-9a56849aad88", 00:17:59.250 "strip_size_kb": 64, 00:17:59.250 "state": "configuring", 00:17:59.250 "raid_level": "concat", 00:17:59.250 "superblock": true, 00:17:59.250 "num_base_bdevs": 4, 00:17:59.250 "num_base_bdevs_discovered": 2, 00:17:59.250 "num_base_bdevs_operational": 4, 00:17:59.250 "base_bdevs_list": [ 00:17:59.250 { 00:17:59.250 "name": "BaseBdev1", 00:17:59.250 "uuid": "8038e3ac-91a9-4e5b-ab0e-8b80b4d35fa2", 00:17:59.250 "is_configured": true, 00:17:59.250 "data_offset": 2048, 00:17:59.250 "data_size": 63488 00:17:59.250 }, 00:17:59.250 { 00:17:59.250 "name": "BaseBdev2", 00:17:59.250 "uuid": "45897934-c401-41f4-93b9-221f318ff183", 00:17:59.250 "is_configured": true, 00:17:59.250 "data_offset": 2048, 00:17:59.250 "data_size": 63488 00:17:59.250 }, 00:17:59.250 { 00:17:59.250 "name": "BaseBdev3", 00:17:59.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.250 "is_configured": false, 00:17:59.250 "data_offset": 0, 00:17:59.250 "data_size": 0 00:17:59.250 }, 00:17:59.250 { 00:17:59.250 "name": "BaseBdev4", 00:17:59.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.250 "is_configured": false, 00:17:59.250 "data_offset": 0, 00:17:59.250 "data_size": 0 00:17:59.250 } 00:17:59.250 ] 00:17:59.250 }' 00:17:59.250 04:58:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.251 04:58:22 -- common/autotest_common.sh@10 -- # set +x 00:17:59.818 04:58:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:59.818 [2024-11-18 04:58:23.303742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:59.818 BaseBdev3 00:17:59.818 04:58:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:59.818 04:58:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:59.818 04:58:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:59.818 04:58:23 -- 
common/autotest_common.sh@899 -- # local i 00:17:59.818 04:58:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:59.818 04:58:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:59.818 04:58:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.077 04:58:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:00.336 [ 00:18:00.336 { 00:18:00.336 "name": "BaseBdev3", 00:18:00.336 "aliases": [ 00:18:00.336 "a9c4063c-e3af-43bc-a5de-ef9d5f4822c0" 00:18:00.336 ], 00:18:00.336 "product_name": "Malloc disk", 00:18:00.336 "block_size": 512, 00:18:00.336 "num_blocks": 65536, 00:18:00.336 "uuid": "a9c4063c-e3af-43bc-a5de-ef9d5f4822c0", 00:18:00.336 "assigned_rate_limits": { 00:18:00.336 "rw_ios_per_sec": 0, 00:18:00.336 "rw_mbytes_per_sec": 0, 00:18:00.336 "r_mbytes_per_sec": 0, 00:18:00.336 "w_mbytes_per_sec": 0 00:18:00.336 }, 00:18:00.336 "claimed": true, 00:18:00.336 "claim_type": "exclusive_write", 00:18:00.336 "zoned": false, 00:18:00.336 "supported_io_types": { 00:18:00.336 "read": true, 00:18:00.336 "write": true, 00:18:00.336 "unmap": true, 00:18:00.336 "write_zeroes": true, 00:18:00.336 "flush": true, 00:18:00.336 "reset": true, 00:18:00.336 "compare": false, 00:18:00.336 "compare_and_write": false, 00:18:00.336 "abort": true, 00:18:00.336 "nvme_admin": false, 00:18:00.336 "nvme_io": false 00:18:00.336 }, 00:18:00.336 "memory_domains": [ 00:18:00.336 { 00:18:00.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.336 "dma_device_type": 2 00:18:00.336 } 00:18:00.336 ], 00:18:00.336 "driver_specific": {} 00:18:00.336 } 00:18:00.336 ] 00:18:00.336 04:58:23 -- common/autotest_common.sh@905 -- # return 0 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.336 04:58:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.596 04:58:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.596 "name": "Existed_Raid", 00:18:00.596 "uuid": "446b38be-e050-41e8-9c0b-9a56849aad88", 00:18:00.596 "strip_size_kb": 64, 00:18:00.596 "state": "configuring", 00:18:00.596 "raid_level": "concat", 00:18:00.596 "superblock": true, 00:18:00.596 "num_base_bdevs": 4, 00:18:00.596 "num_base_bdevs_discovered": 3, 00:18:00.596 "num_base_bdevs_operational": 4, 00:18:00.596 "base_bdevs_list": [ 00:18:00.596 { 00:18:00.596 "name": "BaseBdev1", 
00:18:00.596 "uuid": "8038e3ac-91a9-4e5b-ab0e-8b80b4d35fa2", 00:18:00.596 "is_configured": true, 00:18:00.596 "data_offset": 2048, 00:18:00.596 "data_size": 63488 00:18:00.596 }, 00:18:00.596 { 00:18:00.596 "name": "BaseBdev2", 00:18:00.596 "uuid": "45897934-c401-41f4-93b9-221f318ff183", 00:18:00.596 "is_configured": true, 00:18:00.596 "data_offset": 2048, 00:18:00.596 "data_size": 63488 00:18:00.596 }, 00:18:00.596 { 00:18:00.596 "name": "BaseBdev3", 00:18:00.596 "uuid": "a9c4063c-e3af-43bc-a5de-ef9d5f4822c0", 00:18:00.596 "is_configured": true, 00:18:00.596 "data_offset": 2048, 00:18:00.596 "data_size": 63488 00:18:00.596 }, 00:18:00.596 { 00:18:00.596 "name": "BaseBdev4", 00:18:00.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.596 "is_configured": false, 00:18:00.596 "data_offset": 0, 00:18:00.596 "data_size": 0 00:18:00.596 } 00:18:00.596 ] 00:18:00.596 }' 00:18:00.596 04:58:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.596 04:58:24 -- common/autotest_common.sh@10 -- # set +x 00:18:00.855 04:58:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:01.114 [2024-11-18 04:58:24.553821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.114 [2024-11-18 04:58:24.554065] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:18:01.114 [2024-11-18 04:58:24.554083] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:01.114 [2024-11-18 04:58:24.554274] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:01.114 [2024-11-18 04:58:24.554685] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:18:01.114 [2024-11-18 04:58:24.554725] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:18:01.114 BaseBdev4 00:18:01.114 [2024-11-18 04:58:24.554933] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.114 04:58:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:01.114 04:58:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:01.114 04:58:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:01.114 04:58:24 -- common/autotest_common.sh@899 -- # local i 00:18:01.114 04:58:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:01.114 04:58:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:01.114 04:58:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:01.372 04:58:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:01.630 [ 00:18:01.630 { 00:18:01.630 "name": "BaseBdev4", 00:18:01.630 "aliases": [ 00:18:01.630 "e0ccf81d-cf05-4bc4-af15-2e008f17795e" 00:18:01.630 ], 00:18:01.630 "product_name": "Malloc disk", 00:18:01.630 "block_size": 512, 00:18:01.630 "num_blocks": 65536, 00:18:01.630 "uuid": "e0ccf81d-cf05-4bc4-af15-2e008f17795e", 00:18:01.630 "assigned_rate_limits": { 00:18:01.630 "rw_ios_per_sec": 0, 00:18:01.630 "rw_mbytes_per_sec": 0, 00:18:01.630 "r_mbytes_per_sec": 0, 00:18:01.630 "w_mbytes_per_sec": 0 00:18:01.630 }, 00:18:01.630 "claimed": true, 00:18:01.630 "claim_type": "exclusive_write", 00:18:01.630 "zoned": false, 00:18:01.630 "supported_io_types": { 
00:18:01.630 "read": true, 00:18:01.630 "write": true, 00:18:01.630 "unmap": true, 00:18:01.630 "write_zeroes": true, 00:18:01.630 "flush": true, 00:18:01.630 "reset": true, 00:18:01.630 "compare": false, 00:18:01.630 "compare_and_write": false, 00:18:01.630 "abort": true, 00:18:01.630 "nvme_admin": false, 00:18:01.630 "nvme_io": false 00:18:01.630 }, 00:18:01.630 "memory_domains": [ 00:18:01.630 { 00:18:01.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.630 "dma_device_type": 2 00:18:01.630 } 00:18:01.630 ], 00:18:01.630 "driver_specific": {} 00:18:01.630 } 00:18:01.630 ] 00:18:01.630 04:58:24 -- common/autotest_common.sh@905 -- # return 0 00:18:01.630 04:58:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:01.630 04:58:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:01.630 04:58:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:01.630 04:58:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.630 04:58:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.631 04:58:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.889 04:58:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.889 "name": "Existed_Raid", 00:18:01.889 "uuid": "446b38be-e050-41e8-9c0b-9a56849aad88", 00:18:01.889 "strip_size_kb": 64, 00:18:01.889 "state": "online", 00:18:01.889 "raid_level": "concat", 00:18:01.889 "superblock": true, 00:18:01.889 "num_base_bdevs": 4, 00:18:01.889 "num_base_bdevs_discovered": 4, 00:18:01.889 "num_base_bdevs_operational": 4, 00:18:01.889 "base_bdevs_list": [ 00:18:01.889 { 00:18:01.889 "name": "BaseBdev1", 00:18:01.889 "uuid": "8038e3ac-91a9-4e5b-ab0e-8b80b4d35fa2", 00:18:01.889 "is_configured": true, 00:18:01.889 "data_offset": 2048, 00:18:01.889 "data_size": 63488 00:18:01.889 }, 00:18:01.889 { 00:18:01.889 "name": "BaseBdev2", 00:18:01.889 "uuid": "45897934-c401-41f4-93b9-221f318ff183", 00:18:01.889 "is_configured": true, 00:18:01.889 "data_offset": 2048, 00:18:01.889 "data_size": 63488 00:18:01.889 }, 00:18:01.889 { 00:18:01.889 "name": "BaseBdev3", 00:18:01.889 "uuid": "a9c4063c-e3af-43bc-a5de-ef9d5f4822c0", 00:18:01.889 "is_configured": true, 00:18:01.889 "data_offset": 2048, 00:18:01.889 "data_size": 63488 00:18:01.889 }, 00:18:01.889 { 00:18:01.890 "name": "BaseBdev4", 00:18:01.890 "uuid": "e0ccf81d-cf05-4bc4-af15-2e008f17795e", 00:18:01.890 "is_configured": true, 00:18:01.890 "data_offset": 2048, 00:18:01.890 "data_size": 63488 00:18:01.890 } 00:18:01.890 ] 00:18:01.890 }' 00:18:01.890 04:58:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.890 04:58:25 -- common/autotest_common.sh@10 -- # set +x 00:18:02.147 04:58:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:02.147 [2024-11-18 
04:58:25.642214] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.147 [2024-11-18 04:58:25.642417] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.147 [2024-11-18 04:58:25.642620] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.412 04:58:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.683 04:58:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.683 "name": "Existed_Raid", 00:18:02.683 "uuid": "446b38be-e050-41e8-9c0b-9a56849aad88", 00:18:02.683 "strip_size_kb": 64, 00:18:02.683 "state": "offline", 00:18:02.683 "raid_level": "concat", 00:18:02.683 "superblock": true, 00:18:02.683 "num_base_bdevs": 4, 00:18:02.683 "num_base_bdevs_discovered": 3, 00:18:02.683 "num_base_bdevs_operational": 3, 00:18:02.683 "base_bdevs_list": [ 00:18:02.683 { 00:18:02.683 "name": null, 00:18:02.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.683 "is_configured": false, 00:18:02.683 "data_offset": 2048, 00:18:02.683 "data_size": 63488 00:18:02.683 }, 00:18:02.683 { 00:18:02.683 "name": "BaseBdev2", 00:18:02.683 "uuid": "45897934-c401-41f4-93b9-221f318ff183", 00:18:02.683 "is_configured": true, 00:18:02.683 "data_offset": 2048, 00:18:02.683 "data_size": 63488 00:18:02.683 }, 00:18:02.683 { 00:18:02.683 "name": "BaseBdev3", 00:18:02.683 "uuid": "a9c4063c-e3af-43bc-a5de-ef9d5f4822c0", 00:18:02.683 "is_configured": true, 00:18:02.683 "data_offset": 2048, 00:18:02.683 "data_size": 63488 00:18:02.683 }, 00:18:02.683 { 00:18:02.683 "name": "BaseBdev4", 00:18:02.683 "uuid": "e0ccf81d-cf05-4bc4-af15-2e008f17795e", 00:18:02.683 "is_configured": true, 00:18:02.683 "data_offset": 2048, 00:18:02.683 "data_size": 63488 00:18:02.683 } 00:18:02.683 ] 00:18:02.683 }' 00:18:02.683 04:58:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.683 04:58:25 -- common/autotest_common.sh@10 -- # set +x 00:18:02.941 04:58:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:02.941 04:58:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.941 04:58:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:02.941 04:58:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:03.200 04:58:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:03.200 04:58:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.200 04:58:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:03.200 [2024-11-18 04:58:26.678645] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.459 04:58:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:03.717 [2024-11-18 04:58:27.229662] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:03.976 04:58:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:03.976 04:58:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:03.976 04:58:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.976 04:58:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:04.234 04:58:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:04.234 04:58:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.234 04:58:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:04.234 [2024-11-18 04:58:27.748984] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:04.234 [2024-11-18 04:58:27.749065] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:18:04.492 04:58:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:04.492 04:58:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:04.492 04:58:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.492 04:58:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.750 04:58:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:04.750 04:58:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:04.750 04:58:28 -- bdev/bdev_raid.sh@287 -- # killprocess 76080 00:18:04.750 04:58:28 -- common/autotest_common.sh@936 -- # '[' -z 76080 ']' 00:18:04.750 04:58:28 -- common/autotest_common.sh@940 -- # kill -0 76080 00:18:04.750 04:58:28 -- common/autotest_common.sh@941 -- # uname 00:18:04.750 04:58:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.750 04:58:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76080 00:18:04.750 killing process with pid 76080 00:18:04.750 04:58:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.750 04:58:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.750 04:58:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76080' 00:18:04.750 04:58:28 -- common/autotest_common.sh@955 -- # kill 
76080 00:18:04.750 [2024-11-18 04:58:28.122823] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.750 04:58:28 -- common/autotest_common.sh@960 -- # wait 76080 00:18:04.750 [2024-11-18 04:58:28.122960] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.686 ************************************ 00:18:05.686 END TEST raid_state_function_test_sb 00:18:05.686 ************************************ 00:18:05.686 04:58:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:05.686 00:18:05.686 real 0m12.745s 00:18:05.686 user 0m21.452s 00:18:05.686 sys 0m1.804s 00:18:05.686 04:58:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:05.686 04:58:29 -- common/autotest_common.sh@10 -- # set +x 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:05.945 04:58:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:05.945 04:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.945 04:58:29 -- common/autotest_common.sh@10 -- # set +x 00:18:05.945 ************************************ 00:18:05.945 START TEST raid_superblock_test 00:18:05.945 ************************************ 00:18:05.945 04:58:29 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@357 -- # raid_pid=76488 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@358 -- # waitforlisten 76488 /var/tmp/spdk-raid.sock 00:18:05.945 04:58:29 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:05.945 04:58:29 -- common/autotest_common.sh@829 -- # '[' -z 76488 ']' 00:18:05.945 04:58:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:05.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:05.945 04:58:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.945 04:58:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:18:05.945 04:58:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.945 04:58:29 -- common/autotest_common.sh@10 -- # set +x 00:18:05.945 [2024-11-18 04:58:29.324738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:05.945 [2024-11-18 04:58:29.325071] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76488 ] 00:18:06.203 [2024-11-18 04:58:29.497321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.203 [2024-11-18 04:58:29.715379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.462 [2024-11-18 04:58:29.900498] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.030 04:58:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.030 04:58:30 -- common/autotest_common.sh@862 -- # return 0 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.030 04:58:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:07.290 malloc1 00:18:07.290 04:58:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:07.549 [2024-11-18 04:58:30.893461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.549 [2024-11-18 04:58:30.893733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.549 [2024-11-18 04:58:30.893820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:07.549 [2024-11-18 04:58:30.894060] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.549 [2024-11-18 04:58:30.896865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.549 [2024-11-18 04:58:30.897057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.549 pt1 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.549 04:58:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:07.809 malloc2 00:18:07.809 04:58:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.068 [2024-11-18 04:58:31.404859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.068 [2024-11-18 04:58:31.404952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.068 [2024-11-18 04:58:31.404985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:08.068 [2024-11-18 04:58:31.404997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.068 [2024-11-18 04:58:31.407374] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.068 [2024-11-18 04:58:31.407413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.068 pt2 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.068 04:58:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:08.327 malloc3 00:18:08.327 04:58:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:08.586 [2024-11-18 04:58:31.880722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:08.586 [2024-11-18 04:58:31.880809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.586 [2024-11-18 04:58:31.880840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:18:08.586 [2024-11-18 04:58:31.880853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.586 [2024-11-18 04:58:31.883253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.586 [2024-11-18 04:58:31.883304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:08.586 pt3 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.586 04:58:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:08.845 malloc4 00:18:08.845 04:58:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:09.104 [2024-11-18 04:58:32.409819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:09.104 [2024-11-18 04:58:32.409903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.104 [2024-11-18 04:58:32.409940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:09.104 [2024-11-18 04:58:32.409953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.104 [2024-11-18 04:58:32.413068] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.104 [2024-11-18 04:58:32.413111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:09.104 pt4 00:18:09.104 04:58:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:09.104 04:58:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:09.104 04:58:32 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:09.104 [2024-11-18 04:58:32.614041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.104 [2024-11-18 04:58:32.616254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.104 [2024-11-18 04:58:32.616536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:09.104 [2024-11-18 04:58:32.616619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:09.104 [2024-11-18 04:58:32.616903] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:09.104 [2024-11-18 04:58:32.616920] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:09.104 [2024-11-18 04:58:32.617073] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:09.104 [2024-11-18 04:58:32.617470] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:09.104 [2024-11-18 04:58:32.617526] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:09.104 [2024-11-18 04:58:32.617682] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.363 "name": "raid_bdev1", 00:18:09.363 "uuid": "cad865e1-0c69-4d7f-b0dc-192b9707883a", 00:18:09.363 "strip_size_kb": 64, 00:18:09.363 "state": "online", 00:18:09.363 "raid_level": "concat", 00:18:09.363 "superblock": true, 00:18:09.363 "num_base_bdevs": 4, 00:18:09.363 "num_base_bdevs_discovered": 4, 00:18:09.363 "num_base_bdevs_operational": 4, 00:18:09.363 "base_bdevs_list": [ 00:18:09.363 { 00:18:09.363 "name": "pt1", 00:18:09.363 "uuid": "8a7bb788-b1ce-5fb1-b382-fea349cf37a1", 00:18:09.363 "is_configured": true, 00:18:09.363 "data_offset": 2048, 00:18:09.363 "data_size": 63488 00:18:09.363 }, 00:18:09.363 { 00:18:09.363 "name": "pt2", 00:18:09.363 "uuid": "0f890a44-ba1d-557b-a1df-0a1613a529fa", 00:18:09.363 "is_configured": true, 00:18:09.363 "data_offset": 2048, 00:18:09.363 "data_size": 63488 00:18:09.363 }, 00:18:09.363 { 00:18:09.363 "name": "pt3", 00:18:09.363 "uuid": "9c3b8e50-9426-579c-8cf5-1e6119b6c691", 00:18:09.363 "is_configured": true, 00:18:09.363 "data_offset": 2048, 00:18:09.363 "data_size": 63488 00:18:09.363 }, 00:18:09.363 { 00:18:09.363 "name": "pt4", 00:18:09.363 "uuid": "cd5ff044-35e2-59e8-8848-0dfb1646ec0f", 00:18:09.363 "is_configured": true, 00:18:09.363 "data_offset": 2048, 00:18:09.363 "data_size": 63488 00:18:09.363 } 00:18:09.363 ] 00:18:09.363 }' 00:18:09.363 04:58:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.363 04:58:32 -- common/autotest_common.sh@10 -- # set +x 00:18:09.622 04:58:33 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:09.622 04:58:33 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:09.880 [2024-11-18 04:58:33.350509] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.880 04:58:33 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cad865e1-0c69-4d7f-b0dc-192b9707883a 00:18:09.880 04:58:33 -- bdev/bdev_raid.sh@380 -- # '[' -z cad865e1-0c69-4d7f-b0dc-192b9707883a ']' 00:18:09.880 04:58:33 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:10.139 [2024-11-18 04:58:33.606270] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.139 [2024-11-18 04:58:33.606304] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.139 [2024-11-18 04:58:33.606381] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.139 [2024-11-18 04:58:33.606478] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.139 [2024-11-18 04:58:33.606494] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:10.139 04:58:33 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:10.139 04:58:33 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.399 04:58:33 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:10.399 04:58:33 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:10.399 04:58:33 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:10.399 04:58:33 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:18:10.658 04:58:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:10.658 04:58:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:10.917 04:58:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:10.917 04:58:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:11.176 04:58:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.176 04:58:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:11.435 04:58:34 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:11.435 04:58:34 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:11.694 04:58:34 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:11.694 04:58:34 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:11.694 04:58:34 -- common/autotest_common.sh@650 -- # local es=0 00:18:11.694 04:58:34 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:11.694 04:58:34 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.694 04:58:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.694 04:58:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.694 04:58:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.694 04:58:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.694 04:58:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.694 04:58:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.694 04:58:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:11.694 04:58:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:11.954 [2024-11-18 04:58:35.226673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:11.954 [2024-11-18 04:58:35.228739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:11.954 [2024-11-18 04:58:35.228831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:11.954 [2024-11-18 04:58:35.228876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:11.954 [2024-11-18 04:58:35.228935] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:11.954 [2024-11-18 04:58:35.229011] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:11.954 [2024-11-18 04:58:35.229041] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:11.954 
[2024-11-18 04:58:35.229064] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:11.954 [2024-11-18 04:58:35.229084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.954 [2024-11-18 04:58:35.229095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:18:11.954 request: 00:18:11.954 { 00:18:11.954 "name": "raid_bdev1", 00:18:11.954 "raid_level": "concat", 00:18:11.954 "base_bdevs": [ 00:18:11.954 "malloc1", 00:18:11.954 "malloc2", 00:18:11.954 "malloc3", 00:18:11.954 "malloc4" 00:18:11.954 ], 00:18:11.954 "superblock": false, 00:18:11.954 "strip_size_kb": 64, 00:18:11.954 "method": "bdev_raid_create", 00:18:11.954 "req_id": 1 00:18:11.954 } 00:18:11.954 Got JSON-RPC error response 00:18:11.954 response: 00:18:11.954 { 00:18:11.954 "code": -17, 00:18:11.954 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:11.954 } 00:18:11.954 04:58:35 -- common/autotest_common.sh@653 -- # es=1 00:18:11.954 04:58:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.954 04:58:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.954 04:58:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.954 04:58:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:11.954 04:58:35 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.214 04:58:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:12.214 04:58:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:12.214 04:58:35 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:12.214 [2024-11-18 04:58:35.722702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:12.214 [2024-11-18 04:58:35.722800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.214 [2024-11-18 04:58:35.722855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:18:12.214 [2024-11-18 04:58:35.722887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.214 [2024-11-18 04:58:35.725617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.214 [2024-11-18 04:58:35.725657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:12.214 [2024-11-18 04:58:35.725774] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:12.214 [2024-11-18 04:58:35.725850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:12.214 pt1 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.473 04:58:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.732 04:58:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.732 "name": "raid_bdev1", 00:18:12.732 "uuid": "cad865e1-0c69-4d7f-b0dc-192b9707883a", 00:18:12.732 "strip_size_kb": 64, 00:18:12.732 "state": "configuring", 00:18:12.732 "raid_level": "concat", 00:18:12.732 "superblock": true, 00:18:12.732 "num_base_bdevs": 4, 00:18:12.732 "num_base_bdevs_discovered": 1, 00:18:12.732 "num_base_bdevs_operational": 4, 00:18:12.732 "base_bdevs_list": [ 00:18:12.732 { 00:18:12.732 "name": "pt1", 00:18:12.732 "uuid": "8a7bb788-b1ce-5fb1-b382-fea349cf37a1", 00:18:12.732 "is_configured": true, 00:18:12.732 "data_offset": 2048, 00:18:12.732 "data_size": 63488 00:18:12.732 }, 00:18:12.732 { 00:18:12.732 "name": null, 00:18:12.733 "uuid": "0f890a44-ba1d-557b-a1df-0a1613a529fa", 00:18:12.733 "is_configured": false, 00:18:12.733 "data_offset": 2048, 00:18:12.733 "data_size": 63488 00:18:12.733 }, 00:18:12.733 { 00:18:12.733 "name": null, 00:18:12.733 "uuid": "9c3b8e50-9426-579c-8cf5-1e6119b6c691", 00:18:12.733 "is_configured": false, 00:18:12.733 "data_offset": 2048, 00:18:12.733 "data_size": 63488 00:18:12.733 }, 00:18:12.733 { 00:18:12.733 "name": null, 00:18:12.733 "uuid": "cd5ff044-35e2-59e8-8848-0dfb1646ec0f", 00:18:12.733 "is_configured": false, 00:18:12.733 "data_offset": 2048, 00:18:12.733 "data_size": 63488 00:18:12.733 } 00:18:12.733 ] 00:18:12.733 }' 00:18:12.733 04:58:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.733 04:58:36 -- common/autotest_common.sh@10 -- # set +x 00:18:12.992 04:58:36 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:12.992 04:58:36 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.992 [2024-11-18 04:58:36.506958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.992 [2024-11-18 04:58:36.507252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.992 [2024-11-18 04:58:36.507300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:18:12.992 [2024-11-18 04:58:36.507316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.992 [2024-11-18 04:58:36.507835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.992 [2024-11-18 04:58:36.507857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.992 [2024-11-18 04:58:36.507948] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:12.992 [2024-11-18 04:58:36.507973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.992 pt2 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:13.251 [2024-11-18 04:58:36.710976] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.251 04:58:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.509 04:58:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.509 "name": "raid_bdev1", 00:18:13.509 "uuid": "cad865e1-0c69-4d7f-b0dc-192b9707883a", 00:18:13.509 "strip_size_kb": 64, 00:18:13.509 "state": "configuring", 00:18:13.509 "raid_level": "concat", 00:18:13.509 "superblock": true, 00:18:13.509 "num_base_bdevs": 4, 00:18:13.509 "num_base_bdevs_discovered": 1, 00:18:13.509 "num_base_bdevs_operational": 4, 00:18:13.509 "base_bdevs_list": [ 00:18:13.509 { 00:18:13.509 "name": "pt1", 00:18:13.509 "uuid": "8a7bb788-b1ce-5fb1-b382-fea349cf37a1", 00:18:13.510 "is_configured": true, 00:18:13.510 "data_offset": 2048, 00:18:13.510 "data_size": 63488 00:18:13.510 }, 00:18:13.510 { 00:18:13.510 "name": null, 00:18:13.510 "uuid": "0f890a44-ba1d-557b-a1df-0a1613a529fa", 00:18:13.510 "is_configured": false, 00:18:13.510 "data_offset": 2048, 00:18:13.510 "data_size": 63488 00:18:13.510 }, 00:18:13.510 { 00:18:13.510 "name": null, 00:18:13.510 "uuid": "9c3b8e50-9426-579c-8cf5-1e6119b6c691", 00:18:13.510 "is_configured": false, 00:18:13.510 "data_offset": 2048, 00:18:13.510 "data_size": 63488 00:18:13.510 }, 00:18:13.510 { 00:18:13.510 "name": null, 00:18:13.510 "uuid": "cd5ff044-35e2-59e8-8848-0dfb1646ec0f", 00:18:13.510 "is_configured": false, 00:18:13.510 "data_offset": 2048, 00:18:13.510 "data_size": 63488 00:18:13.510 } 00:18:13.510 ] 00:18:13.510 }' 00:18:13.510 04:58:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.510 04:58:36 -- common/autotest_common.sh@10 -- # set +x 00:18:13.768 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:13.768 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:13.768 04:58:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.027 [2024-11-18 04:58:37.475234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.027 [2024-11-18 04:58:37.475345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.027 [2024-11-18 04:58:37.475388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:18:14.027 [2024-11-18 04:58:37.475404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.027 [2024-11-18 04:58:37.475891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.027 [2024-11-18 04:58:37.475925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.027 [2024-11-18 04:58:37.476032] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:14.027 [2024-11-18 04:58:37.476079] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.027 pt2 00:18:14.027 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:14.027 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:14.027 04:58:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:14.286 [2024-11-18 04:58:37.685529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:14.286 [2024-11-18 04:58:37.685610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.286 [2024-11-18 04:58:37.685636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:18:14.286 [2024-11-18 04:58:37.685650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.286 [2024-11-18 04:58:37.686051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.286 [2024-11-18 04:58:37.686078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:14.286 [2024-11-18 04:58:37.686157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:14.286 [2024-11-18 04:58:37.686190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:14.286 pt3 00:18:14.286 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:14.286 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:14.286 04:58:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:14.545 [2024-11-18 04:58:37.895471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:14.545 [2024-11-18 04:58:37.895549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.545 [2024-11-18 04:58:37.895591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:18:14.545 [2024-11-18 04:58:37.895607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.545 [2024-11-18 04:58:37.896006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.546 [2024-11-18 04:58:37.896033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:14.546 [2024-11-18 04:58:37.896111] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:14.546 [2024-11-18 04:58:37.896141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:14.546 [2024-11-18 04:58:37.896318] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:18:14.546 [2024-11-18 04:58:37.896339] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:14.546 [2024-11-18 04:58:37.896437] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:14.546 [2024-11-18 04:58:37.896863] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:18:14.546 [2024-11-18 04:58:37.896878] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:18:14.546 [2024-11-18 04:58:37.897045] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:14.546 pt4 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.546 04:58:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.805 04:58:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.805 "name": "raid_bdev1", 00:18:14.805 "uuid": "cad865e1-0c69-4d7f-b0dc-192b9707883a", 00:18:14.805 "strip_size_kb": 64, 00:18:14.805 "state": "online", 00:18:14.805 "raid_level": "concat", 00:18:14.805 "superblock": true, 00:18:14.805 "num_base_bdevs": 4, 00:18:14.805 "num_base_bdevs_discovered": 4, 00:18:14.805 "num_base_bdevs_operational": 4, 00:18:14.805 "base_bdevs_list": [ 00:18:14.805 { 00:18:14.805 "name": "pt1", 00:18:14.805 "uuid": "8a7bb788-b1ce-5fb1-b382-fea349cf37a1", 00:18:14.805 "is_configured": true, 00:18:14.805 "data_offset": 2048, 00:18:14.805 "data_size": 63488 00:18:14.805 }, 00:18:14.805 { 00:18:14.805 "name": "pt2", 00:18:14.805 "uuid": "0f890a44-ba1d-557b-a1df-0a1613a529fa", 00:18:14.805 "is_configured": true, 00:18:14.805 "data_offset": 2048, 00:18:14.805 "data_size": 63488 00:18:14.805 }, 00:18:14.805 { 00:18:14.805 "name": "pt3", 00:18:14.805 "uuid": "9c3b8e50-9426-579c-8cf5-1e6119b6c691", 00:18:14.805 "is_configured": true, 00:18:14.805 "data_offset": 2048, 00:18:14.805 "data_size": 63488 00:18:14.805 }, 00:18:14.805 { 00:18:14.805 "name": "pt4", 00:18:14.805 "uuid": "cd5ff044-35e2-59e8-8848-0dfb1646ec0f", 00:18:14.805 "is_configured": true, 00:18:14.805 "data_offset": 2048, 00:18:14.805 "data_size": 63488 00:18:14.805 } 00:18:14.805 ] 00:18:14.805 }' 00:18:14.805 04:58:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.805 04:58:38 -- common/autotest_common.sh@10 -- # set +x 00:18:15.063 04:58:38 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:15.063 04:58:38 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:15.323 [2024-11-18 04:58:38.647914] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.323 04:58:38 -- bdev/bdev_raid.sh@430 -- # '[' cad865e1-0c69-4d7f-b0dc-192b9707883a '!=' cad865e1-0c69-4d7f-b0dc-192b9707883a ']' 00:18:15.323 04:58:38 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:15.323 04:58:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:15.323 04:58:38 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:15.323 04:58:38 -- bdev/bdev_raid.sh@511 -- # killprocess 76488 00:18:15.323 04:58:38 -- common/autotest_common.sh@936 -- # '[' 
-z 76488 ']' 00:18:15.323 04:58:38 -- common/autotest_common.sh@940 -- # kill -0 76488 00:18:15.323 04:58:38 -- common/autotest_common.sh@941 -- # uname 00:18:15.323 04:58:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.323 04:58:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76488 00:18:15.323 killing process with pid 76488 00:18:15.323 04:58:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:15.323 04:58:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:15.323 04:58:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76488' 00:18:15.323 04:58:38 -- common/autotest_common.sh@955 -- # kill 76488 00:18:15.323 [2024-11-18 04:58:38.699382] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.323 [2024-11-18 04:58:38.699457] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.323 04:58:38 -- common/autotest_common.sh@960 -- # wait 76488 00:18:15.323 [2024-11-18 04:58:38.699537] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.323 [2024-11-18 04:58:38.699567] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:18:15.582 [2024-11-18 04:58:38.981965] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.518 04:58:40 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:16.518 00:18:16.518 real 0m10.766s 00:18:16.518 user 0m17.881s 00:18:16.518 sys 0m1.514s 00:18:16.518 ************************************ 00:18:16.518 END TEST raid_superblock_test 00:18:16.518 ************************************ 00:18:16.518 04:58:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:16.518 04:58:40 -- common/autotest_common.sh@10 -- # set +x 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:16.777 04:58:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:16.777 04:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.777 04:58:40 -- common/autotest_common.sh@10 -- # set +x 00:18:16.777 ************************************ 00:18:16.777 START TEST raid_state_function_test 00:18:16.777 ************************************ 00:18:16.777 04:58:40 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- 
# (( i <= num_base_bdevs )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:16.777 Process raid pid: 76779 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=76779 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76779' 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:16.777 04:58:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76779 /var/tmp/spdk-raid.sock 00:18:16.777 04:58:40 -- common/autotest_common.sh@829 -- # '[' -z 76779 ']' 00:18:16.777 04:58:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:16.777 04:58:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.777 04:58:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:16.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:16.777 04:58:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.777 04:58:40 -- common/autotest_common.sh@10 -- # set +x 00:18:16.777 [2024-11-18 04:58:40.148943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:16.777 [2024-11-18 04:58:40.149360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.037 [2024-11-18 04:58:40.322555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.037 [2024-11-18 04:58:40.507311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.296 [2024-11-18 04:58:40.676044] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.883 04:58:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.883 04:58:41 -- common/autotest_common.sh@862 -- # return 0 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:17.883 [2024-11-18 04:58:41.323141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.883 [2024-11-18 04:58:41.323264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.883 [2024-11-18 04:58:41.323280] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.883 [2024-11-18 04:58:41.323294] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.883 [2024-11-18 04:58:41.323302] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:17.883 [2024-11-18 04:58:41.323313] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:17.883 [2024-11-18 04:58:41.323320] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:17.883 [2024-11-18 04:58:41.323332] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.883 04:58:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.156 04:58:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.156 "name": "Existed_Raid", 00:18:18.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.156 "strip_size_kb": 0, 00:18:18.156 "state": "configuring", 00:18:18.156 "raid_level": "raid1", 00:18:18.156 "superblock": false, 00:18:18.156 "num_base_bdevs": 4, 00:18:18.156 "num_base_bdevs_discovered": 0, 00:18:18.156 "num_base_bdevs_operational": 4, 00:18:18.156 "base_bdevs_list": [ 00:18:18.157 { 00:18:18.157 "name": 
"BaseBdev1", 00:18:18.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.157 "is_configured": false, 00:18:18.157 "data_offset": 0, 00:18:18.157 "data_size": 0 00:18:18.157 }, 00:18:18.157 { 00:18:18.157 "name": "BaseBdev2", 00:18:18.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.157 "is_configured": false, 00:18:18.157 "data_offset": 0, 00:18:18.157 "data_size": 0 00:18:18.157 }, 00:18:18.157 { 00:18:18.157 "name": "BaseBdev3", 00:18:18.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.157 "is_configured": false, 00:18:18.157 "data_offset": 0, 00:18:18.157 "data_size": 0 00:18:18.157 }, 00:18:18.157 { 00:18:18.157 "name": "BaseBdev4", 00:18:18.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.157 "is_configured": false, 00:18:18.157 "data_offset": 0, 00:18:18.157 "data_size": 0 00:18:18.157 } 00:18:18.157 ] 00:18:18.157 }' 00:18:18.157 04:58:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.157 04:58:41 -- common/autotest_common.sh@10 -- # set +x 00:18:18.417 04:58:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.675 [2024-11-18 04:58:42.091247] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.675 [2024-11-18 04:58:42.091336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:18.675 04:58:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:18.935 [2024-11-18 04:58:42.299399] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.935 [2024-11-18 04:58:42.299486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.935 [2024-11-18 04:58:42.299499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.935 [2024-11-18 04:58:42.299512] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.935 [2024-11-18 04:58:42.299520] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:18.935 [2024-11-18 04:58:42.299531] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:18.935 [2024-11-18 04:58:42.299538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:18.935 [2024-11-18 04:58:42.299549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:18.935 04:58:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.194 [2024-11-18 04:58:42.582817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.194 BaseBdev1 00:18:19.194 04:58:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:19.194 04:58:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:19.194 04:58:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:19.194 04:58:42 -- common/autotest_common.sh@899 -- # local i 00:18:19.194 04:58:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:19.194 04:58:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:19.194 04:58:42 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.453 04:58:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.713 [ 00:18:19.713 { 00:18:19.713 "name": "BaseBdev1", 00:18:19.713 "aliases": [ 00:18:19.713 "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3" 00:18:19.713 ], 00:18:19.713 "product_name": "Malloc disk", 00:18:19.713 "block_size": 512, 00:18:19.713 "num_blocks": 65536, 00:18:19.713 "uuid": "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3", 00:18:19.713 "assigned_rate_limits": { 00:18:19.713 "rw_ios_per_sec": 0, 00:18:19.713 "rw_mbytes_per_sec": 0, 00:18:19.713 "r_mbytes_per_sec": 0, 00:18:19.713 "w_mbytes_per_sec": 0 00:18:19.713 }, 00:18:19.713 "claimed": true, 00:18:19.713 "claim_type": "exclusive_write", 00:18:19.713 "zoned": false, 00:18:19.713 "supported_io_types": { 00:18:19.713 "read": true, 00:18:19.713 "write": true, 00:18:19.713 "unmap": true, 00:18:19.713 "write_zeroes": true, 00:18:19.713 "flush": true, 00:18:19.713 "reset": true, 00:18:19.713 "compare": false, 00:18:19.713 "compare_and_write": false, 00:18:19.713 "abort": true, 00:18:19.713 "nvme_admin": false, 00:18:19.713 "nvme_io": false 00:18:19.713 }, 00:18:19.713 "memory_domains": [ 00:18:19.713 { 00:18:19.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.713 "dma_device_type": 2 00:18:19.713 } 00:18:19.713 ], 00:18:19.713 "driver_specific": {} 00:18:19.713 } 00:18:19.713 ] 00:18:19.713 04:58:43 -- common/autotest_common.sh@905 -- # return 0 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.713 04:58:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.972 04:58:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.972 "name": "Existed_Raid", 00:18:19.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.972 "strip_size_kb": 0, 00:18:19.972 "state": "configuring", 00:18:19.972 "raid_level": "raid1", 00:18:19.972 "superblock": false, 00:18:19.972 "num_base_bdevs": 4, 00:18:19.972 "num_base_bdevs_discovered": 1, 00:18:19.972 "num_base_bdevs_operational": 4, 00:18:19.972 "base_bdevs_list": [ 00:18:19.972 { 00:18:19.972 "name": "BaseBdev1", 00:18:19.972 "uuid": "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3", 00:18:19.972 "is_configured": true, 00:18:19.972 "data_offset": 0, 00:18:19.972 "data_size": 65536 00:18:19.972 }, 00:18:19.972 { 00:18:19.972 "name": "BaseBdev2", 00:18:19.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.972 "is_configured": false, 00:18:19.972 "data_offset": 0, 00:18:19.972 "data_size": 0 00:18:19.972 }, 
00:18:19.972 { 00:18:19.972 "name": "BaseBdev3", 00:18:19.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.972 "is_configured": false, 00:18:19.972 "data_offset": 0, 00:18:19.972 "data_size": 0 00:18:19.972 }, 00:18:19.972 { 00:18:19.972 "name": "BaseBdev4", 00:18:19.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.972 "is_configured": false, 00:18:19.972 "data_offset": 0, 00:18:19.972 "data_size": 0 00:18:19.972 } 00:18:19.972 ] 00:18:19.972 }' 00:18:19.972 04:58:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.973 04:58:43 -- common/autotest_common.sh@10 -- # set +x 00:18:20.232 04:58:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.232 [2024-11-18 04:58:43.747218] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.232 [2024-11-18 04:58:43.747499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.490 [2024-11-18 04:58:43.951438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.490 [2024-11-18 04:58:43.953833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.490 [2024-11-18 04:58:43.954023] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.490 [2024-11-18 04:58:43.954143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.490 [2024-11-18 04:58:43.954232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.490 [2024-11-18 04:58:43.954422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.490 [2024-11-18 04:58:43.954486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.490 04:58:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.748 04:58:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.748 "name": "Existed_Raid", 00:18:20.748 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:20.748 "strip_size_kb": 0, 00:18:20.748 "state": "configuring", 00:18:20.748 "raid_level": "raid1", 00:18:20.748 "superblock": false, 00:18:20.748 "num_base_bdevs": 4, 00:18:20.748 "num_base_bdevs_discovered": 1, 00:18:20.748 "num_base_bdevs_operational": 4, 00:18:20.748 "base_bdevs_list": [ 00:18:20.748 { 00:18:20.748 "name": "BaseBdev1", 00:18:20.748 "uuid": "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3", 00:18:20.748 "is_configured": true, 00:18:20.748 "data_offset": 0, 00:18:20.748 "data_size": 65536 00:18:20.748 }, 00:18:20.748 { 00:18:20.748 "name": "BaseBdev2", 00:18:20.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.748 "is_configured": false, 00:18:20.748 "data_offset": 0, 00:18:20.748 "data_size": 0 00:18:20.748 }, 00:18:20.748 { 00:18:20.748 "name": "BaseBdev3", 00:18:20.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.748 "is_configured": false, 00:18:20.748 "data_offset": 0, 00:18:20.748 "data_size": 0 00:18:20.748 }, 00:18:20.748 { 00:18:20.748 "name": "BaseBdev4", 00:18:20.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.748 "is_configured": false, 00:18:20.748 "data_offset": 0, 00:18:20.748 "data_size": 0 00:18:20.748 } 00:18:20.748 ] 00:18:20.748 }' 00:18:20.748 04:58:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.748 04:58:44 -- common/autotest_common.sh@10 -- # set +x 00:18:21.006 04:58:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:21.265 BaseBdev2 00:18:21.265 [2024-11-18 04:58:44.733372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.265 04:58:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:21.265 04:58:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:21.265 04:58:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:21.265 04:58:44 -- common/autotest_common.sh@899 -- # local i 00:18:21.265 04:58:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:21.265 04:58:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:21.265 04:58:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:21.524 04:58:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.783 [ 00:18:21.783 { 00:18:21.783 "name": "BaseBdev2", 00:18:21.783 "aliases": [ 00:18:21.783 "3c6fc1af-0e1c-4468-9cd0-f6773aa9bcb5" 00:18:21.783 ], 00:18:21.783 "product_name": "Malloc disk", 00:18:21.783 "block_size": 512, 00:18:21.783 "num_blocks": 65536, 00:18:21.783 "uuid": "3c6fc1af-0e1c-4468-9cd0-f6773aa9bcb5", 00:18:21.783 "assigned_rate_limits": { 00:18:21.783 "rw_ios_per_sec": 0, 00:18:21.783 "rw_mbytes_per_sec": 0, 00:18:21.783 "r_mbytes_per_sec": 0, 00:18:21.783 "w_mbytes_per_sec": 0 00:18:21.783 }, 00:18:21.783 "claimed": true, 00:18:21.783 "claim_type": "exclusive_write", 00:18:21.783 "zoned": false, 00:18:21.783 "supported_io_types": { 00:18:21.783 "read": true, 00:18:21.783 "write": true, 00:18:21.783 "unmap": true, 00:18:21.783 "write_zeroes": true, 00:18:21.783 "flush": true, 00:18:21.783 "reset": true, 00:18:21.783 "compare": false, 00:18:21.783 "compare_and_write": false, 00:18:21.783 "abort": true, 00:18:21.783 "nvme_admin": false, 00:18:21.783 "nvme_io": false 00:18:21.783 }, 00:18:21.783 "memory_domains": [ 00:18:21.783 { 
00:18:21.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.783 "dma_device_type": 2 00:18:21.783 } 00:18:21.783 ], 00:18:21.783 "driver_specific": {} 00:18:21.783 } 00:18:21.783 ] 00:18:21.783 04:58:45 -- common/autotest_common.sh@905 -- # return 0 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.783 04:58:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.042 04:58:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.042 "name": "Existed_Raid", 00:18:22.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.042 "strip_size_kb": 0, 00:18:22.042 "state": "configuring", 00:18:22.042 "raid_level": "raid1", 00:18:22.042 "superblock": false, 00:18:22.042 "num_base_bdevs": 4, 00:18:22.042 "num_base_bdevs_discovered": 2, 00:18:22.042 "num_base_bdevs_operational": 4, 00:18:22.042 "base_bdevs_list": [ 00:18:22.042 { 00:18:22.042 "name": "BaseBdev1", 00:18:22.042 "uuid": "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3", 00:18:22.042 "is_configured": true, 00:18:22.043 "data_offset": 0, 00:18:22.043 "data_size": 65536 00:18:22.043 }, 00:18:22.043 { 00:18:22.043 "name": "BaseBdev2", 00:18:22.043 "uuid": "3c6fc1af-0e1c-4468-9cd0-f6773aa9bcb5", 00:18:22.043 "is_configured": true, 00:18:22.043 "data_offset": 0, 00:18:22.043 "data_size": 65536 00:18:22.043 }, 00:18:22.043 { 00:18:22.043 "name": "BaseBdev3", 00:18:22.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.043 "is_configured": false, 00:18:22.043 "data_offset": 0, 00:18:22.043 "data_size": 0 00:18:22.043 }, 00:18:22.043 { 00:18:22.043 "name": "BaseBdev4", 00:18:22.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.043 "is_configured": false, 00:18:22.043 "data_offset": 0, 00:18:22.043 "data_size": 0 00:18:22.043 } 00:18:22.043 ] 00:18:22.043 }' 00:18:22.043 04:58:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.043 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:18:22.302 04:58:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.562 [2024-11-18 04:58:45.964867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.562 BaseBdev3 00:18:22.562 04:58:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:22.562 04:58:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:22.562 04:58:45 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.562 04:58:45 -- 
common/autotest_common.sh@899 -- # local i 00:18:22.562 04:58:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.562 04:58:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.562 04:58:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.821 04:58:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:23.081 [ 00:18:23.081 { 00:18:23.081 "name": "BaseBdev3", 00:18:23.081 "aliases": [ 00:18:23.081 "9ea3cf8f-b923-4224-8ea0-165685d3706f" 00:18:23.081 ], 00:18:23.081 "product_name": "Malloc disk", 00:18:23.081 "block_size": 512, 00:18:23.081 "num_blocks": 65536, 00:18:23.081 "uuid": "9ea3cf8f-b923-4224-8ea0-165685d3706f", 00:18:23.081 "assigned_rate_limits": { 00:18:23.081 "rw_ios_per_sec": 0, 00:18:23.081 "rw_mbytes_per_sec": 0, 00:18:23.081 "r_mbytes_per_sec": 0, 00:18:23.081 "w_mbytes_per_sec": 0 00:18:23.081 }, 00:18:23.081 "claimed": true, 00:18:23.081 "claim_type": "exclusive_write", 00:18:23.081 "zoned": false, 00:18:23.081 "supported_io_types": { 00:18:23.081 "read": true, 00:18:23.081 "write": true, 00:18:23.081 "unmap": true, 00:18:23.081 "write_zeroes": true, 00:18:23.081 "flush": true, 00:18:23.081 "reset": true, 00:18:23.081 "compare": false, 00:18:23.081 "compare_and_write": false, 00:18:23.081 "abort": true, 00:18:23.081 "nvme_admin": false, 00:18:23.081 "nvme_io": false 00:18:23.081 }, 00:18:23.081 "memory_domains": [ 00:18:23.081 { 00:18:23.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.081 "dma_device_type": 2 00:18:23.081 } 00:18:23.081 ], 00:18:23.081 "driver_specific": {} 00:18:23.081 } 00:18:23.081 ] 00:18:23.081 04:58:46 -- common/autotest_common.sh@905 -- # return 0 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.081 04:58:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.340 04:58:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.340 "name": "Existed_Raid", 00:18:23.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.340 "strip_size_kb": 0, 00:18:23.340 "state": "configuring", 00:18:23.340 "raid_level": "raid1", 00:18:23.340 "superblock": false, 00:18:23.340 "num_base_bdevs": 4, 00:18:23.340 "num_base_bdevs_discovered": 3, 00:18:23.340 "num_base_bdevs_operational": 4, 00:18:23.340 "base_bdevs_list": [ 00:18:23.340 { 00:18:23.340 "name": "BaseBdev1", 
00:18:23.340 "uuid": "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3", 00:18:23.340 "is_configured": true, 00:18:23.340 "data_offset": 0, 00:18:23.340 "data_size": 65536 00:18:23.340 }, 00:18:23.340 { 00:18:23.340 "name": "BaseBdev2", 00:18:23.340 "uuid": "3c6fc1af-0e1c-4468-9cd0-f6773aa9bcb5", 00:18:23.340 "is_configured": true, 00:18:23.340 "data_offset": 0, 00:18:23.340 "data_size": 65536 00:18:23.340 }, 00:18:23.340 { 00:18:23.340 "name": "BaseBdev3", 00:18:23.340 "uuid": "9ea3cf8f-b923-4224-8ea0-165685d3706f", 00:18:23.340 "is_configured": true, 00:18:23.340 "data_offset": 0, 00:18:23.340 "data_size": 65536 00:18:23.340 }, 00:18:23.340 { 00:18:23.340 "name": "BaseBdev4", 00:18:23.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.341 "is_configured": false, 00:18:23.341 "data_offset": 0, 00:18:23.341 "data_size": 0 00:18:23.341 } 00:18:23.341 ] 00:18:23.341 }' 00:18:23.341 04:58:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.341 04:58:46 -- common/autotest_common.sh@10 -- # set +x 00:18:23.600 04:58:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:23.859 [2024-11-18 04:58:47.236137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:23.859 [2024-11-18 04:58:47.236499] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:18:23.859 [2024-11-18 04:58:47.236553] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:23.859 [2024-11-18 04:58:47.236812] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:23.859 [2024-11-18 04:58:47.237318] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:18:23.859 [2024-11-18 04:58:47.237525] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:18:23.859 [2024-11-18 04:58:47.238015] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.859 BaseBdev4 00:18:23.859 04:58:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:23.859 04:58:47 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:23.859 04:58:47 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:23.859 04:58:47 -- common/autotest_common.sh@899 -- # local i 00:18:23.859 04:58:47 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:23.859 04:58:47 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:23.859 04:58:47 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.118 04:58:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:24.377 [ 00:18:24.377 { 00:18:24.377 "name": "BaseBdev4", 00:18:24.377 "aliases": [ 00:18:24.377 "393d5782-6632-47c1-9da2-40dec2a68685" 00:18:24.377 ], 00:18:24.377 "product_name": "Malloc disk", 00:18:24.377 "block_size": 512, 00:18:24.377 "num_blocks": 65536, 00:18:24.377 "uuid": "393d5782-6632-47c1-9da2-40dec2a68685", 00:18:24.377 "assigned_rate_limits": { 00:18:24.377 "rw_ios_per_sec": 0, 00:18:24.377 "rw_mbytes_per_sec": 0, 00:18:24.377 "r_mbytes_per_sec": 0, 00:18:24.377 "w_mbytes_per_sec": 0 00:18:24.377 }, 00:18:24.377 "claimed": true, 00:18:24.377 "claim_type": "exclusive_write", 00:18:24.377 "zoned": false, 00:18:24.377 "supported_io_types": { 
00:18:24.377 "read": true, 00:18:24.377 "write": true, 00:18:24.377 "unmap": true, 00:18:24.377 "write_zeroes": true, 00:18:24.377 "flush": true, 00:18:24.377 "reset": true, 00:18:24.377 "compare": false, 00:18:24.377 "compare_and_write": false, 00:18:24.377 "abort": true, 00:18:24.377 "nvme_admin": false, 00:18:24.377 "nvme_io": false 00:18:24.377 }, 00:18:24.377 "memory_domains": [ 00:18:24.377 { 00:18:24.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.377 "dma_device_type": 2 00:18:24.377 } 00:18:24.377 ], 00:18:24.377 "driver_specific": {} 00:18:24.377 } 00:18:24.377 ] 00:18:24.377 04:58:47 -- common/autotest_common.sh@905 -- # return 0 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:24.377 04:58:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.378 04:58:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.378 04:58:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.378 04:58:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.378 04:58:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.378 04:58:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.637 04:58:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.637 "name": "Existed_Raid", 00:18:24.637 "uuid": "5f2747e2-2af2-4da7-9fcc-31a33776f5cd", 00:18:24.637 "strip_size_kb": 0, 00:18:24.637 "state": "online", 00:18:24.637 "raid_level": "raid1", 00:18:24.637 "superblock": false, 00:18:24.637 "num_base_bdevs": 4, 00:18:24.637 "num_base_bdevs_discovered": 4, 00:18:24.637 "num_base_bdevs_operational": 4, 00:18:24.637 "base_bdevs_list": [ 00:18:24.637 { 00:18:24.637 "name": "BaseBdev1", 00:18:24.637 "uuid": "b489588a-ff7f-4f9a-ab29-2f0ba99f68a3", 00:18:24.637 "is_configured": true, 00:18:24.637 "data_offset": 0, 00:18:24.637 "data_size": 65536 00:18:24.637 }, 00:18:24.637 { 00:18:24.637 "name": "BaseBdev2", 00:18:24.637 "uuid": "3c6fc1af-0e1c-4468-9cd0-f6773aa9bcb5", 00:18:24.637 "is_configured": true, 00:18:24.637 "data_offset": 0, 00:18:24.637 "data_size": 65536 00:18:24.637 }, 00:18:24.637 { 00:18:24.637 "name": "BaseBdev3", 00:18:24.637 "uuid": "9ea3cf8f-b923-4224-8ea0-165685d3706f", 00:18:24.637 "is_configured": true, 00:18:24.637 "data_offset": 0, 00:18:24.637 "data_size": 65536 00:18:24.637 }, 00:18:24.637 { 00:18:24.637 "name": "BaseBdev4", 00:18:24.637 "uuid": "393d5782-6632-47c1-9da2-40dec2a68685", 00:18:24.637 "is_configured": true, 00:18:24.637 "data_offset": 0, 00:18:24.637 "data_size": 65536 00:18:24.637 } 00:18:24.637 ] 00:18:24.637 }' 00:18:24.637 04:58:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.637 04:58:47 -- common/autotest_common.sh@10 -- # set +x 00:18:24.896 04:58:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:25.156 [2024-11-18 04:58:48.452708] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.156 04:58:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.415 04:58:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.415 "name": "Existed_Raid", 00:18:25.415 "uuid": "5f2747e2-2af2-4da7-9fcc-31a33776f5cd", 00:18:25.415 "strip_size_kb": 0, 00:18:25.415 "state": "online", 00:18:25.415 "raid_level": "raid1", 00:18:25.415 "superblock": false, 00:18:25.415 "num_base_bdevs": 4, 00:18:25.415 "num_base_bdevs_discovered": 3, 00:18:25.415 "num_base_bdevs_operational": 3, 00:18:25.415 "base_bdevs_list": [ 00:18:25.415 { 00:18:25.415 "name": null, 00:18:25.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.415 "is_configured": false, 00:18:25.415 "data_offset": 0, 00:18:25.415 "data_size": 65536 00:18:25.415 }, 00:18:25.415 { 00:18:25.415 "name": "BaseBdev2", 00:18:25.415 "uuid": "3c6fc1af-0e1c-4468-9cd0-f6773aa9bcb5", 00:18:25.415 "is_configured": true, 00:18:25.415 "data_offset": 0, 00:18:25.415 "data_size": 65536 00:18:25.415 }, 00:18:25.415 { 00:18:25.415 "name": "BaseBdev3", 00:18:25.415 "uuid": "9ea3cf8f-b923-4224-8ea0-165685d3706f", 00:18:25.415 "is_configured": true, 00:18:25.415 "data_offset": 0, 00:18:25.415 "data_size": 65536 00:18:25.415 }, 00:18:25.415 { 00:18:25.415 "name": "BaseBdev4", 00:18:25.415 "uuid": "393d5782-6632-47c1-9da2-40dec2a68685", 00:18:25.415 "is_configured": true, 00:18:25.415 "data_offset": 0, 00:18:25.415 "data_size": 65536 00:18:25.415 } 00:18:25.415 ] 00:18:25.415 }' 00:18:25.415 04:58:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.415 04:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:25.675 04:58:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:25.675 04:58:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:25.675 04:58:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.675 04:58:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:25.933 04:58:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:25.933 04:58:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:25.933 04:58:49 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:26.191 [2024-11-18 04:58:49.513476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:26.191 04:58:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:26.191 04:58:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:26.191 04:58:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.191 04:58:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:26.450 04:58:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:26.450 04:58:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:26.450 04:58:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:26.709 [2024-11-18 04:58:50.066239] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:26.709 04:58:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:26.709 04:58:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:26.709 04:58:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.709 04:58:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:26.968 04:58:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:26.968 04:58:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:26.968 04:58:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:27.227 [2024-11-18 04:58:50.576013] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:27.227 [2024-11-18 04:58:50.576048] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.227 [2024-11-18 04:58:50.576102] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.227 [2024-11-18 04:58:50.650619] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.227 [2024-11-18 04:58:50.650662] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:18:27.227 04:58:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:27.227 04:58:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.227 04:58:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.227 04:58:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:27.487 04:58:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:27.487 04:58:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:27.487 04:58:50 -- bdev/bdev_raid.sh@287 -- # killprocess 76779 00:18:27.487 04:58:50 -- common/autotest_common.sh@936 -- # '[' -z 76779 ']' 00:18:27.487 04:58:50 -- common/autotest_common.sh@940 -- # kill -0 76779 00:18:27.487 04:58:50 -- common/autotest_common.sh@941 -- # uname 00:18:27.487 04:58:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.487 04:58:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76779 00:18:27.487 killing process with pid 76779 00:18:27.487 04:58:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:27.487 04:58:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:27.487 04:58:50 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 76779' 00:18:27.487 04:58:50 -- common/autotest_common.sh@955 -- # kill 76779 00:18:27.487 04:58:50 -- common/autotest_common.sh@960 -- # wait 76779 00:18:27.487 [2024-11-18 04:58:50.907305] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.487 [2024-11-18 04:58:50.907819] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:28.866 00:18:28.866 real 0m11.957s 00:18:28.866 user 0m19.978s 00:18:28.866 sys 0m1.756s 00:18:28.866 04:58:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:28.866 04:58:52 -- common/autotest_common.sh@10 -- # set +x 00:18:28.866 ************************************ 00:18:28.866 END TEST raid_state_function_test 00:18:28.866 ************************************ 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:28.866 04:58:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:28.866 04:58:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.866 04:58:52 -- common/autotest_common.sh@10 -- # set +x 00:18:28.866 ************************************ 00:18:28.866 START TEST raid_state_function_test_sb 00:18:28.866 ************************************ 00:18:28.866 04:58:52 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:28.866 04:58:52 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=77173 00:18:28.866 Process raid pid: 77173 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 77173' 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:28.866 04:58:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 77173 /var/tmp/spdk-raid.sock 00:18:28.866 04:58:52 -- common/autotest_common.sh@829 -- # '[' -z 77173 ']' 00:18:28.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:28.866 04:58:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.866 04:58:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.866 04:58:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:28.866 04:58:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.866 04:58:52 -- common/autotest_common.sh@10 -- # set +x 00:18:28.866 [2024-11-18 04:58:52.191212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:28.866 [2024-11-18 04:58:52.191449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.866 [2024-11-18 04:58:52.384798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.125 [2024-11-18 04:58:52.618167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.384 [2024-11-18 04:58:52.779102] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.953 04:58:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.953 04:58:53 -- common/autotest_common.sh@862 -- # return 0 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:29.953 [2024-11-18 04:58:53.385168] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:29.953 [2024-11-18 04:58:53.385279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:29.953 [2024-11-18 04:58:53.385296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.953 [2024-11-18 04:58:53.385311] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.953 [2024-11-18 04:58:53.385319] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:29.953 [2024-11-18 04:58:53.385331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:29.953 [2024-11-18 04:58:53.385339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:29.953 [2024-11-18 04:58:53.385366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
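The xtrace at this point is stepping through the locals of verify_raid_bdev_state just after the array was created: the helper snapshots the raid bdev via bdev_raid_get_bdevs, isolates it with the jq select filter seen below, and asserts on its fields. A minimal sketch of that check, assuming the rpc.py and socket paths shown in the log (the comparison body is a simplified reconstruction, not the actual bdev_raid.sh source):

#!/usr/bin/env bash
# Simplified reconstruction of the state check being traced above; only the
# rpc.py invocation and the jq filter are verbatim from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

verify_raid_state() {
    local name=$1 expected_state=$2 expected_level=$3
    local info
    # Fetch every raid bdev and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r --arg n "$name" '.[] | select(.name == $n)')
    [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
        [[ $(jq -r '.raid_level' <<<"$info") == "$expected_level" ]]
}

verify_raid_state Existed_Raid configuring raid1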
00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.953 04:58:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.212 04:58:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.212 "name": "Existed_Raid", 00:18:30.212 "uuid": "566ba5c7-8ea2-4961-936a-75d1c89c3d59", 00:18:30.212 "strip_size_kb": 0, 00:18:30.212 "state": "configuring", 00:18:30.212 "raid_level": "raid1", 00:18:30.212 "superblock": true, 00:18:30.212 "num_base_bdevs": 4, 00:18:30.212 "num_base_bdevs_discovered": 0, 00:18:30.212 "num_base_bdevs_operational": 4, 00:18:30.212 "base_bdevs_list": [ 00:18:30.212 { 00:18:30.212 "name": "BaseBdev1", 00:18:30.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.212 "is_configured": false, 00:18:30.212 "data_offset": 0, 00:18:30.212 "data_size": 0 00:18:30.212 }, 00:18:30.212 { 00:18:30.212 "name": "BaseBdev2", 00:18:30.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.212 "is_configured": false, 00:18:30.212 "data_offset": 0, 00:18:30.212 "data_size": 0 00:18:30.212 }, 00:18:30.212 { 00:18:30.212 "name": "BaseBdev3", 00:18:30.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.212 "is_configured": false, 00:18:30.212 "data_offset": 0, 00:18:30.212 "data_size": 0 00:18:30.212 }, 00:18:30.212 { 00:18:30.212 "name": "BaseBdev4", 00:18:30.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.212 "is_configured": false, 00:18:30.212 "data_offset": 0, 00:18:30.212 "data_size": 0 00:18:30.212 } 00:18:30.212 ] 00:18:30.212 }' 00:18:30.212 04:58:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.212 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:18:30.471 04:58:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.730 [2024-11-18 04:58:54.185222] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.730 [2024-11-18 04:58:54.185285] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:30.730 04:58:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:30.990 [2024-11-18 04:58:54.397351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.990 [2024-11-18 04:58:54.397423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.990 [2024-11-18 04:58:54.397437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.990 [2024-11-18 04:58:54.397452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.990 [2024-11-18 04:58:54.397460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:30.990 [2024-11-18 04:58:54.397473] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:30.990 [2024-11-18 04:58:54.397481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:30.990 [2024-11-18 04:58:54.397493] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:30.990 04:58:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.249 [2024-11-18 04:58:54.642671] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.249 BaseBdev1 00:18:31.249 04:58:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:31.249 04:58:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:31.249 04:58:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:31.249 04:58:54 -- common/autotest_common.sh@899 -- # local i 00:18:31.249 04:58:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:31.249 04:58:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:31.249 04:58:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.507 04:58:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.766 [ 00:18:31.766 { 00:18:31.766 "name": "BaseBdev1", 00:18:31.766 "aliases": [ 00:18:31.766 "aad7ed14-6df8-4482-87c9-cd430c80b758" 00:18:31.766 ], 00:18:31.766 "product_name": "Malloc disk", 00:18:31.766 "block_size": 512, 00:18:31.766 "num_blocks": 65536, 00:18:31.766 "uuid": "aad7ed14-6df8-4482-87c9-cd430c80b758", 00:18:31.766 "assigned_rate_limits": { 00:18:31.766 "rw_ios_per_sec": 0, 00:18:31.766 "rw_mbytes_per_sec": 0, 00:18:31.766 "r_mbytes_per_sec": 0, 00:18:31.766 "w_mbytes_per_sec": 0 00:18:31.766 }, 00:18:31.766 "claimed": true, 00:18:31.766 "claim_type": "exclusive_write", 00:18:31.766 "zoned": false, 00:18:31.766 "supported_io_types": { 00:18:31.766 "read": true, 00:18:31.766 "write": true, 00:18:31.767 "unmap": true, 00:18:31.767 "write_zeroes": true, 00:18:31.767 "flush": true, 00:18:31.767 "reset": true, 00:18:31.767 "compare": false, 00:18:31.767 "compare_and_write": false, 00:18:31.767 "abort": true, 00:18:31.767 "nvme_admin": false, 00:18:31.767 "nvme_io": false 00:18:31.767 }, 00:18:31.767 "memory_domains": [ 00:18:31.767 { 00:18:31.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.767 "dma_device_type": 2 00:18:31.767 } 00:18:31.767 ], 00:18:31.767 "driver_specific": {} 00:18:31.767 } 00:18:31.767 ] 00:18:31.767 04:58:55 -- common/autotest_common.sh@905 -- # return 0 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.767 04:58:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.025 04:58:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.025 "name": "Existed_Raid", 00:18:32.025 "uuid": "f02b68dd-3c59-470b-8c6a-6d967f538a39", 00:18:32.026 "strip_size_kb": 0, 00:18:32.026 "state": "configuring", 00:18:32.026 "raid_level": "raid1", 00:18:32.026 "superblock": true, 00:18:32.026 "num_base_bdevs": 4, 00:18:32.026 "num_base_bdevs_discovered": 1, 00:18:32.026 "num_base_bdevs_operational": 4, 00:18:32.026 "base_bdevs_list": [ 00:18:32.026 { 00:18:32.026 "name": "BaseBdev1", 00:18:32.026 "uuid": "aad7ed14-6df8-4482-87c9-cd430c80b758", 00:18:32.026 "is_configured": true, 00:18:32.026 "data_offset": 2048, 00:18:32.026 "data_size": 63488 00:18:32.026 }, 00:18:32.026 { 00:18:32.026 "name": "BaseBdev2", 00:18:32.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.026 "is_configured": false, 00:18:32.026 "data_offset": 0, 00:18:32.026 "data_size": 0 00:18:32.026 }, 00:18:32.026 { 00:18:32.026 "name": "BaseBdev3", 00:18:32.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.026 "is_configured": false, 00:18:32.026 "data_offset": 0, 00:18:32.026 "data_size": 0 00:18:32.026 }, 00:18:32.026 { 00:18:32.026 "name": "BaseBdev4", 00:18:32.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.026 "is_configured": false, 00:18:32.026 "data_offset": 0, 00:18:32.026 "data_size": 0 00:18:32.026 } 00:18:32.026 ] 00:18:32.026 }' 00:18:32.026 04:58:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.026 04:58:55 -- common/autotest_common.sh@10 -- # set +x 00:18:32.285 04:58:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:32.544 [2024-11-18 04:58:55.819123] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.544 [2024-11-18 04:58:55.819175] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:32.544 04:58:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:32.544 04:58:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.802 04:58:56 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:33.062 BaseBdev1 00:18:33.062 04:58:56 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:33.062 04:58:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:33.062 04:58:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:33.062 04:58:56 -- common/autotest_common.sh@899 -- # local i 00:18:33.062 04:58:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:33.062 04:58:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:33.062 04:58:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.320 04:58:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:33.580 [ 00:18:33.580 { 00:18:33.580 "name": "BaseBdev1", 00:18:33.580 "aliases": [ 00:18:33.580 "5dd967fb-d3eb-4ee6-8c7b-7d3f0d7685d1" 00:18:33.580 
], 00:18:33.580 "product_name": "Malloc disk", 00:18:33.580 "block_size": 512, 00:18:33.580 "num_blocks": 65536, 00:18:33.580 "uuid": "5dd967fb-d3eb-4ee6-8c7b-7d3f0d7685d1", 00:18:33.580 "assigned_rate_limits": { 00:18:33.580 "rw_ios_per_sec": 0, 00:18:33.580 "rw_mbytes_per_sec": 0, 00:18:33.580 "r_mbytes_per_sec": 0, 00:18:33.580 "w_mbytes_per_sec": 0 00:18:33.580 }, 00:18:33.580 "claimed": false, 00:18:33.580 "zoned": false, 00:18:33.580 "supported_io_types": { 00:18:33.580 "read": true, 00:18:33.580 "write": true, 00:18:33.580 "unmap": true, 00:18:33.580 "write_zeroes": true, 00:18:33.580 "flush": true, 00:18:33.580 "reset": true, 00:18:33.580 "compare": false, 00:18:33.580 "compare_and_write": false, 00:18:33.580 "abort": true, 00:18:33.580 "nvme_admin": false, 00:18:33.580 "nvme_io": false 00:18:33.580 }, 00:18:33.580 "memory_domains": [ 00:18:33.580 { 00:18:33.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.580 "dma_device_type": 2 00:18:33.580 } 00:18:33.580 ], 00:18:33.580 "driver_specific": {} 00:18:33.580 } 00:18:33.580 ] 00:18:33.580 04:58:56 -- common/autotest_common.sh@905 -- # return 0 00:18:33.580 04:58:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.580 [2024-11-18 04:58:57.071436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.580 [2024-11-18 04:58:57.073578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.580 [2024-11-18 04:58:57.073647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.580 [2024-11-18 04:58:57.073677] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.580 [2024-11-18 04:58:57.073693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.580 [2024-11-18 04:58:57.073702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:33.580 [2024-11-18 04:58:57.073718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.580 04:58:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.838 04:58:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.838 "name": "Existed_Raid", 
00:18:33.838 "uuid": "8d45fba0-30b2-456d-ab5a-0e568baa06b3", 00:18:33.838 "strip_size_kb": 0, 00:18:33.838 "state": "configuring", 00:18:33.838 "raid_level": "raid1", 00:18:33.838 "superblock": true, 00:18:33.838 "num_base_bdevs": 4, 00:18:33.838 "num_base_bdevs_discovered": 1, 00:18:33.838 "num_base_bdevs_operational": 4, 00:18:33.838 "base_bdevs_list": [ 00:18:33.838 { 00:18:33.838 "name": "BaseBdev1", 00:18:33.838 "uuid": "5dd967fb-d3eb-4ee6-8c7b-7d3f0d7685d1", 00:18:33.839 "is_configured": true, 00:18:33.839 "data_offset": 2048, 00:18:33.839 "data_size": 63488 00:18:33.839 }, 00:18:33.839 { 00:18:33.839 "name": "BaseBdev2", 00:18:33.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.839 "is_configured": false, 00:18:33.839 "data_offset": 0, 00:18:33.839 "data_size": 0 00:18:33.839 }, 00:18:33.839 { 00:18:33.839 "name": "BaseBdev3", 00:18:33.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.839 "is_configured": false, 00:18:33.839 "data_offset": 0, 00:18:33.839 "data_size": 0 00:18:33.839 }, 00:18:33.839 { 00:18:33.839 "name": "BaseBdev4", 00:18:33.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.839 "is_configured": false, 00:18:33.839 "data_offset": 0, 00:18:33.839 "data_size": 0 00:18:33.839 } 00:18:33.839 ] 00:18:33.839 }' 00:18:33.839 04:58:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.839 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:18:34.406 04:58:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:34.406 [2024-11-18 04:58:57.901120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.406 BaseBdev2 00:18:34.406 04:58:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:34.406 04:58:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:34.406 04:58:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:34.406 04:58:57 -- common/autotest_common.sh@899 -- # local i 00:18:34.406 04:58:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:34.406 04:58:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:34.406 04:58:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:34.667 04:58:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:34.944 [ 00:18:34.944 { 00:18:34.944 "name": "BaseBdev2", 00:18:34.944 "aliases": [ 00:18:34.944 "65684c3b-c80d-40af-b15f-5e237dba61f1" 00:18:34.944 ], 00:18:34.944 "product_name": "Malloc disk", 00:18:34.944 "block_size": 512, 00:18:34.944 "num_blocks": 65536, 00:18:34.944 "uuid": "65684c3b-c80d-40af-b15f-5e237dba61f1", 00:18:34.944 "assigned_rate_limits": { 00:18:34.944 "rw_ios_per_sec": 0, 00:18:34.944 "rw_mbytes_per_sec": 0, 00:18:34.944 "r_mbytes_per_sec": 0, 00:18:34.944 "w_mbytes_per_sec": 0 00:18:34.944 }, 00:18:34.944 "claimed": true, 00:18:34.944 "claim_type": "exclusive_write", 00:18:34.944 "zoned": false, 00:18:34.944 "supported_io_types": { 00:18:34.944 "read": true, 00:18:34.944 "write": true, 00:18:34.944 "unmap": true, 00:18:34.944 "write_zeroes": true, 00:18:34.944 "flush": true, 00:18:34.944 "reset": true, 00:18:34.944 "compare": false, 00:18:34.944 "compare_and_write": false, 00:18:34.944 "abort": true, 00:18:34.944 "nvme_admin": false, 00:18:34.944 "nvme_io": false 00:18:34.944 }, 00:18:34.944 
"memory_domains": [ 00:18:34.944 { 00:18:34.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.944 "dma_device_type": 2 00:18:34.944 } 00:18:34.944 ], 00:18:34.944 "driver_specific": {} 00:18:34.944 } 00:18:34.944 ] 00:18:34.944 04:58:58 -- common/autotest_common.sh@905 -- # return 0 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.944 04:58:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.213 04:58:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.213 "name": "Existed_Raid", 00:18:35.213 "uuid": "8d45fba0-30b2-456d-ab5a-0e568baa06b3", 00:18:35.213 "strip_size_kb": 0, 00:18:35.213 "state": "configuring", 00:18:35.213 "raid_level": "raid1", 00:18:35.213 "superblock": true, 00:18:35.213 "num_base_bdevs": 4, 00:18:35.213 "num_base_bdevs_discovered": 2, 00:18:35.213 "num_base_bdevs_operational": 4, 00:18:35.213 "base_bdevs_list": [ 00:18:35.213 { 00:18:35.213 "name": "BaseBdev1", 00:18:35.213 "uuid": "5dd967fb-d3eb-4ee6-8c7b-7d3f0d7685d1", 00:18:35.213 "is_configured": true, 00:18:35.213 "data_offset": 2048, 00:18:35.213 "data_size": 63488 00:18:35.213 }, 00:18:35.213 { 00:18:35.213 "name": "BaseBdev2", 00:18:35.213 "uuid": "65684c3b-c80d-40af-b15f-5e237dba61f1", 00:18:35.213 "is_configured": true, 00:18:35.213 "data_offset": 2048, 00:18:35.213 "data_size": 63488 00:18:35.213 }, 00:18:35.213 { 00:18:35.213 "name": "BaseBdev3", 00:18:35.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.213 "is_configured": false, 00:18:35.213 "data_offset": 0, 00:18:35.213 "data_size": 0 00:18:35.213 }, 00:18:35.213 { 00:18:35.213 "name": "BaseBdev4", 00:18:35.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.213 "is_configured": false, 00:18:35.213 "data_offset": 0, 00:18:35.213 "data_size": 0 00:18:35.213 } 00:18:35.213 ] 00:18:35.213 }' 00:18:35.213 04:58:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.213 04:58:58 -- common/autotest_common.sh@10 -- # set +x 00:18:35.472 04:58:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:35.731 [2024-11-18 04:58:59.088368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.731 BaseBdev3 00:18:35.731 04:58:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:35.731 04:58:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:35.731 04:58:59 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:18:35.731 04:58:59 -- common/autotest_common.sh@899 -- # local i 00:18:35.731 04:58:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:35.731 04:58:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:35.731 04:58:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.989 04:58:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:36.246 [ 00:18:36.246 { 00:18:36.246 "name": "BaseBdev3", 00:18:36.246 "aliases": [ 00:18:36.246 "4a990bdf-3fa5-4e6e-96c3-86b98dfcd4fb" 00:18:36.246 ], 00:18:36.246 "product_name": "Malloc disk", 00:18:36.246 "block_size": 512, 00:18:36.246 "num_blocks": 65536, 00:18:36.246 "uuid": "4a990bdf-3fa5-4e6e-96c3-86b98dfcd4fb", 00:18:36.246 "assigned_rate_limits": { 00:18:36.246 "rw_ios_per_sec": 0, 00:18:36.246 "rw_mbytes_per_sec": 0, 00:18:36.246 "r_mbytes_per_sec": 0, 00:18:36.246 "w_mbytes_per_sec": 0 00:18:36.246 }, 00:18:36.246 "claimed": true, 00:18:36.246 "claim_type": "exclusive_write", 00:18:36.246 "zoned": false, 00:18:36.246 "supported_io_types": { 00:18:36.246 "read": true, 00:18:36.246 "write": true, 00:18:36.246 "unmap": true, 00:18:36.246 "write_zeroes": true, 00:18:36.246 "flush": true, 00:18:36.246 "reset": true, 00:18:36.246 "compare": false, 00:18:36.246 "compare_and_write": false, 00:18:36.246 "abort": true, 00:18:36.246 "nvme_admin": false, 00:18:36.246 "nvme_io": false 00:18:36.246 }, 00:18:36.246 "memory_domains": [ 00:18:36.246 { 00:18:36.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.246 "dma_device_type": 2 00:18:36.246 } 00:18:36.246 ], 00:18:36.246 "driver_specific": {} 00:18:36.246 } 00:18:36.246 ] 00:18:36.246 04:58:59 -- common/autotest_common.sh@905 -- # return 0 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.246 04:58:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.504 04:58:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.504 "name": "Existed_Raid", 00:18:36.504 "uuid": "8d45fba0-30b2-456d-ab5a-0e568baa06b3", 00:18:36.504 "strip_size_kb": 0, 00:18:36.504 "state": "configuring", 00:18:36.504 "raid_level": "raid1", 00:18:36.504 "superblock": true, 00:18:36.504 "num_base_bdevs": 4, 00:18:36.504 "num_base_bdevs_discovered": 3, 00:18:36.504 "num_base_bdevs_operational": 4, 00:18:36.504 "base_bdevs_list": [ 00:18:36.504 { 
00:18:36.504 "name": "BaseBdev1", 00:18:36.504 "uuid": "5dd967fb-d3eb-4ee6-8c7b-7d3f0d7685d1", 00:18:36.504 "is_configured": true, 00:18:36.504 "data_offset": 2048, 00:18:36.504 "data_size": 63488 00:18:36.504 }, 00:18:36.504 { 00:18:36.504 "name": "BaseBdev2", 00:18:36.504 "uuid": "65684c3b-c80d-40af-b15f-5e237dba61f1", 00:18:36.505 "is_configured": true, 00:18:36.505 "data_offset": 2048, 00:18:36.505 "data_size": 63488 00:18:36.505 }, 00:18:36.505 { 00:18:36.505 "name": "BaseBdev3", 00:18:36.505 "uuid": "4a990bdf-3fa5-4e6e-96c3-86b98dfcd4fb", 00:18:36.505 "is_configured": true, 00:18:36.505 "data_offset": 2048, 00:18:36.505 "data_size": 63488 00:18:36.505 }, 00:18:36.505 { 00:18:36.505 "name": "BaseBdev4", 00:18:36.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.505 "is_configured": false, 00:18:36.505 "data_offset": 0, 00:18:36.505 "data_size": 0 00:18:36.505 } 00:18:36.505 ] 00:18:36.505 }' 00:18:36.505 04:58:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.505 04:58:59 -- common/autotest_common.sh@10 -- # set +x 00:18:36.763 04:59:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:37.022 [2024-11-18 04:59:00.406556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:37.022 [2024-11-18 04:59:00.406826] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:18:37.022 [2024-11-18 04:59:00.406845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:37.022 [2024-11-18 04:59:00.406970] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:37.022 [2024-11-18 04:59:00.407357] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:18:37.022 [2024-11-18 04:59:00.407381] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:18:37.022 BaseBdev4 00:18:37.022 [2024-11-18 04:59:00.407547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.022 04:59:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:37.022 04:59:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:37.022 04:59:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:37.022 04:59:00 -- common/autotest_common.sh@899 -- # local i 00:18:37.022 04:59:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:37.022 04:59:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:37.022 04:59:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.281 04:59:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:37.540 [ 00:18:37.540 { 00:18:37.540 "name": "BaseBdev4", 00:18:37.540 "aliases": [ 00:18:37.540 "62fc6ecf-6e33-4bb8-b9cb-046fc2b01c42" 00:18:37.540 ], 00:18:37.540 "product_name": "Malloc disk", 00:18:37.540 "block_size": 512, 00:18:37.540 "num_blocks": 65536, 00:18:37.540 "uuid": "62fc6ecf-6e33-4bb8-b9cb-046fc2b01c42", 00:18:37.540 "assigned_rate_limits": { 00:18:37.540 "rw_ios_per_sec": 0, 00:18:37.540 "rw_mbytes_per_sec": 0, 00:18:37.540 "r_mbytes_per_sec": 0, 00:18:37.540 "w_mbytes_per_sec": 0 00:18:37.540 }, 00:18:37.540 "claimed": true, 00:18:37.540 "claim_type": "exclusive_write", 00:18:37.540 "zoned": false, 
00:18:37.540 "supported_io_types": { 00:18:37.540 "read": true, 00:18:37.540 "write": true, 00:18:37.540 "unmap": true, 00:18:37.540 "write_zeroes": true, 00:18:37.540 "flush": true, 00:18:37.540 "reset": true, 00:18:37.540 "compare": false, 00:18:37.540 "compare_and_write": false, 00:18:37.540 "abort": true, 00:18:37.540 "nvme_admin": false, 00:18:37.540 "nvme_io": false 00:18:37.540 }, 00:18:37.540 "memory_domains": [ 00:18:37.540 { 00:18:37.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.540 "dma_device_type": 2 00:18:37.540 } 00:18:37.540 ], 00:18:37.540 "driver_specific": {} 00:18:37.540 } 00:18:37.540 ] 00:18:37.540 04:59:00 -- common/autotest_common.sh@905 -- # return 0 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.540 04:59:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.799 04:59:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.799 "name": "Existed_Raid", 00:18:37.799 "uuid": "8d45fba0-30b2-456d-ab5a-0e568baa06b3", 00:18:37.799 "strip_size_kb": 0, 00:18:37.799 "state": "online", 00:18:37.799 "raid_level": "raid1", 00:18:37.799 "superblock": true, 00:18:37.799 "num_base_bdevs": 4, 00:18:37.799 "num_base_bdevs_discovered": 4, 00:18:37.799 "num_base_bdevs_operational": 4, 00:18:37.799 "base_bdevs_list": [ 00:18:37.799 { 00:18:37.799 "name": "BaseBdev1", 00:18:37.799 "uuid": "5dd967fb-d3eb-4ee6-8c7b-7d3f0d7685d1", 00:18:37.799 "is_configured": true, 00:18:37.799 "data_offset": 2048, 00:18:37.799 "data_size": 63488 00:18:37.799 }, 00:18:37.799 { 00:18:37.799 "name": "BaseBdev2", 00:18:37.799 "uuid": "65684c3b-c80d-40af-b15f-5e237dba61f1", 00:18:37.799 "is_configured": true, 00:18:37.799 "data_offset": 2048, 00:18:37.799 "data_size": 63488 00:18:37.799 }, 00:18:37.799 { 00:18:37.799 "name": "BaseBdev3", 00:18:37.799 "uuid": "4a990bdf-3fa5-4e6e-96c3-86b98dfcd4fb", 00:18:37.799 "is_configured": true, 00:18:37.799 "data_offset": 2048, 00:18:37.799 "data_size": 63488 00:18:37.799 }, 00:18:37.799 { 00:18:37.799 "name": "BaseBdev4", 00:18:37.799 "uuid": "62fc6ecf-6e33-4bb8-b9cb-046fc2b01c42", 00:18:37.799 "is_configured": true, 00:18:37.799 "data_offset": 2048, 00:18:37.799 "data_size": 63488 00:18:37.799 } 00:18:37.799 ] 00:18:37.799 }' 00:18:37.799 04:59:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.799 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:18:38.058 04:59:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:18:38.317 [2024-11-18 04:59:01.679032] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.317 04:59:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.576 04:59:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.576 "name": "Existed_Raid", 00:18:38.576 "uuid": "8d45fba0-30b2-456d-ab5a-0e568baa06b3", 00:18:38.576 "strip_size_kb": 0, 00:18:38.576 "state": "online", 00:18:38.576 "raid_level": "raid1", 00:18:38.576 "superblock": true, 00:18:38.576 "num_base_bdevs": 4, 00:18:38.576 "num_base_bdevs_discovered": 3, 00:18:38.576 "num_base_bdevs_operational": 3, 00:18:38.576 "base_bdevs_list": [ 00:18:38.576 { 00:18:38.576 "name": null, 00:18:38.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.576 "is_configured": false, 00:18:38.576 "data_offset": 2048, 00:18:38.576 "data_size": 63488 00:18:38.576 }, 00:18:38.576 { 00:18:38.576 "name": "BaseBdev2", 00:18:38.576 "uuid": "65684c3b-c80d-40af-b15f-5e237dba61f1", 00:18:38.576 "is_configured": true, 00:18:38.576 "data_offset": 2048, 00:18:38.576 "data_size": 63488 00:18:38.576 }, 00:18:38.576 { 00:18:38.576 "name": "BaseBdev3", 00:18:38.576 "uuid": "4a990bdf-3fa5-4e6e-96c3-86b98dfcd4fb", 00:18:38.576 "is_configured": true, 00:18:38.576 "data_offset": 2048, 00:18:38.576 "data_size": 63488 00:18:38.576 }, 00:18:38.576 { 00:18:38.576 "name": "BaseBdev4", 00:18:38.576 "uuid": "62fc6ecf-6e33-4bb8-b9cb-046fc2b01c42", 00:18:38.576 "is_configured": true, 00:18:38.576 "data_offset": 2048, 00:18:38.576 "data_size": 63488 00:18:38.576 } 00:18:38.576 ] 00:18:38.576 }' 00:18:38.576 04:59:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.576 04:59:02 -- common/autotest_common.sh@10 -- # set +x 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:18:39.143 04:59:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:39.402 [2024-11-18 04:59:02.772794] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:39.402 04:59:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:39.402 04:59:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:39.402 04:59:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.402 04:59:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:39.660 04:59:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:39.660 04:59:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:39.660 04:59:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:39.919 [2024-11-18 04:59:03.326441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:39.919 04:59:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:39.919 04:59:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:39.919 04:59:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.919 04:59:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:40.177 04:59:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:40.177 04:59:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.177 04:59:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:40.435 [2024-11-18 04:59:03.855959] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:40.435 [2024-11-18 04:59:03.856203] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.435 [2024-11-18 04:59:03.856405] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.435 [2024-11-18 04:59:03.938351] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.435 [2024-11-18 04:59:03.938601] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:18:40.435 04:59:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:40.435 04:59:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:40.693 04:59:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.693 04:59:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:40.693 04:59:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:40.693 04:59:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:40.693 04:59:04 -- bdev/bdev_raid.sh@287 -- # killprocess 77173 00:18:40.693 04:59:04 -- common/autotest_common.sh@936 -- # '[' -z 77173 ']' 00:18:40.693 04:59:04 -- common/autotest_common.sh@940 -- # kill -0 77173 00:18:40.693 04:59:04 -- common/autotest_common.sh@941 -- # uname 00:18:40.693 04:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.693 04:59:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77173 00:18:40.693 killing process with pid 77173 00:18:40.693 04:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:40.693 04:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:18:40.693 04:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77173' 00:18:40.693 04:59:04 -- common/autotest_common.sh@955 -- # kill 77173 00:18:40.693 04:59:04 -- common/autotest_common.sh@960 -- # wait 77173 00:18:40.693 [2024-11-18 04:59:04.211725] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:40.693 [2024-11-18 04:59:04.211842] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:42.068 00:18:42.068 real 0m13.268s 00:18:42.068 user 0m22.350s 00:18:42.068 sys 0m1.840s 00:18:42.068 ************************************ 00:18:42.068 END TEST raid_state_function_test_sb 00:18:42.068 ************************************ 00:18:42.068 04:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:42.068 04:59:05 -- common/autotest_common.sh@10 -- # set +x 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:42.068 04:59:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:42.068 04:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.068 04:59:05 -- common/autotest_common.sh@10 -- # set +x 00:18:42.068 ************************************ 00:18:42.068 START TEST raid_superblock_test 00:18:42.068 ************************************ 00:18:42.068 04:59:05 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:42.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@357 -- # raid_pid=77585 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@358 -- # waitforlisten 77585 /var/tmp/spdk-raid.sock 00:18:42.068 04:59:05 -- common/autotest_common.sh@829 -- # '[' -z 77585 ']' 00:18:42.068 04:59:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:42.068 04:59:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.068 04:59:05 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:42.068 04:59:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
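With raid_state_function_test_sb torn down, raid_superblock_test repeats the bootstrap every test in this file uses: launch bdev_svc on a private RPC socket, wait for it to listen, then drive everything through rpc.py. A minimal sketch of that pattern, assuming the paths from the log (the polling loop is an illustrative stand-in for waitforlisten from autotest_common.sh, not its actual implementation):

# Launch the stub app on a private socket with the bdev_raid debug log flag.
app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$app" -r "$sock" -L bdev_raid &
raid_pid=$!

# Poll until the app answers on its UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done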
00:18:42.068 04:59:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.068 04:59:05 -- common/autotest_common.sh@10 -- # set +x 00:18:42.068 [2024-11-18 04:59:05.476603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:42.068 [2024-11-18 04:59:05.476934] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77585 ] 00:18:42.326 [2024-11-18 04:59:05.650811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.583 [2024-11-18 04:59:05.870444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.583 [2024-11-18 04:59:06.049791] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:42.950 04:59:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.950 04:59:06 -- common/autotest_common.sh@862 -- # return 0 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:42.950 04:59:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:43.208 malloc1 00:18:43.208 04:59:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.466 [2024-11-18 04:59:06.873419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.466 [2024-11-18 04:59:06.873507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.466 [2024-11-18 04:59:06.873551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:43.466 [2024-11-18 04:59:06.873567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.466 [2024-11-18 04:59:06.876076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.466 [2024-11-18 04:59:06.876129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.466 pt1 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.466 04:59:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:43.725 malloc2 00:18:43.725 04:59:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.983 [2024-11-18 04:59:07.344879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.983 [2024-11-18 04:59:07.344954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.983 [2024-11-18 04:59:07.344989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:43.983 [2024-11-18 04:59:07.345004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.983 [2024-11-18 04:59:07.347481] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.983 [2024-11-18 04:59:07.347526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.983 pt2 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.983 04:59:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:44.241 malloc3 00:18:44.241 04:59:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:44.499 [2024-11-18 04:59:07.912003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:44.499 [2024-11-18 04:59:07.912077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.499 [2024-11-18 04:59:07.912114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:18:44.499 [2024-11-18 04:59:07.912130] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.499 [2024-11-18 04:59:07.914590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.499 [2024-11-18 04:59:07.914635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:44.499 pt3 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:44.499 04:59:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:44.758 malloc4 00:18:44.758 04:59:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:45.016 [2024-11-18 04:59:08.415043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:45.016 [2024-11-18 04:59:08.415320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.016 [2024-11-18 04:59:08.415410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:45.016 [2024-11-18 04:59:08.415679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.016 [2024-11-18 04:59:08.418302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.016 [2024-11-18 04:59:08.418347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:45.016 pt4 00:18:45.016 04:59:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:45.016 04:59:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:45.016 04:59:08 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:45.274 [2024-11-18 04:59:08.635344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.274 [2024-11-18 04:59:08.637426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.274 [2024-11-18 04:59:08.637514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:45.274 [2024-11-18 04:59:08.637612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:45.274 [2024-11-18 04:59:08.637866] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:45.274 [2024-11-18 04:59:08.637884] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.274 [2024-11-18 04:59:08.638024] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:45.274 [2024-11-18 04:59:08.638462] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:45.274 [2024-11-18 04:59:08.638484] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:45.274 [2024-11-18 04:59:08.638665] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:45.274 04:59:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.532 04:59:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.532 "name": "raid_bdev1", 00:18:45.532 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:45.532 "strip_size_kb": 0, 00:18:45.532 "state": "online", 00:18:45.532 "raid_level": "raid1", 00:18:45.532 "superblock": true, 00:18:45.532 "num_base_bdevs": 4, 00:18:45.532 "num_base_bdevs_discovered": 4, 00:18:45.532 "num_base_bdevs_operational": 4, 00:18:45.532 "base_bdevs_list": [ 00:18:45.532 { 00:18:45.532 "name": "pt1", 00:18:45.532 "uuid": "67773b9f-5982-58b7-8ce0-d71b4ad77ecb", 00:18:45.532 "is_configured": true, 00:18:45.532 "data_offset": 2048, 00:18:45.532 "data_size": 63488 00:18:45.532 }, 00:18:45.532 { 00:18:45.532 "name": "pt2", 00:18:45.532 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:45.532 "is_configured": true, 00:18:45.532 "data_offset": 2048, 00:18:45.532 "data_size": 63488 00:18:45.532 }, 00:18:45.532 { 00:18:45.532 "name": "pt3", 00:18:45.532 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:45.532 "is_configured": true, 00:18:45.532 "data_offset": 2048, 00:18:45.532 "data_size": 63488 00:18:45.532 }, 00:18:45.532 { 00:18:45.532 "name": "pt4", 00:18:45.532 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:45.532 "is_configured": true, 00:18:45.532 "data_offset": 2048, 00:18:45.532 "data_size": 63488 00:18:45.532 } 00:18:45.532 ] 00:18:45.532 }' 00:18:45.532 04:59:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.532 04:59:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.812 04:59:09 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:45.812 04:59:09 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:46.073 [2024-11-18 04:59:09.415735] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.073 04:59:09 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f81ccf36-7442-473a-b3ed-34bb04acc439 00:18:46.073 04:59:09 -- bdev/bdev_raid.sh@380 -- # '[' -z f81ccf36-7442-473a-b3ed-34bb04acc439 ']' 00:18:46.073 04:59:09 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:46.331 [2024-11-18 04:59:09.631478] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.331 [2024-11-18 04:59:09.631520] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.331 [2024-11-18 04:59:09.631609] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.331 [2024-11-18 04:59:09.631716] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.331 [2024-11-18 04:59:09.631731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:46.331 04:59:09 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.331 04:59:09 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:46.590 04:59:09 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:46.590 04:59:09 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:46.590 04:59:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:46.590 04:59:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
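The verify_raid_bdev_state checks that produce the JSON blobs above work by filtering bdev_raid_get_bdevs output on the bdev name and comparing individual fields; the teardown in progress here then deletes the raid bdev and each remaining passthru in turn. A rough equivalent of the state check, assuming the same socket (field names taken from the JSON above; the helper's exact assertions live in bdev_raid.sh):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # expected here: state=online, raid_level=raid1, 4 of 4 base bdevs discovered
  [ "$(jq -r .state <<< "$info")" = online ] || echo "unexpected state"
  [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 4 ] || echo "missing base bdevs"

The per-member geometry in that JSON is consistent with the malloc sizing: 32 MB at 512 B per block is 65536 blocks, of which data_offset 2048 blocks (1 MB) are reserved at the head of each member for the on-disk superblock region, leaving data_size 63488.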
00:18:46.590 04:59:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:46.590 04:59:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:46.849 04:59:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:46.849 04:59:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:47.108 04:59:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:47.108 04:59:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:47.367 04:59:10 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:47.367 04:59:10 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:47.626 04:59:10 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:47.626 04:59:10 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:47.626 04:59:10 -- common/autotest_common.sh@650 -- # local es=0 00:18:47.626 04:59:10 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:47.626 04:59:10 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.626 04:59:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.626 04:59:10 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.626 04:59:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.626 04:59:10 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.626 04:59:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.626 04:59:10 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.626 04:59:10 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:47.626 04:59:10 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:47.885 [2024-11-18 04:59:11.159834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:47.885 [2024-11-18 04:59:11.161864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:47.885 [2024-11-18 04:59:11.161926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:47.885 [2024-11-18 04:59:11.161970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:47.885 [2024-11-18 04:59:11.162031] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:47.885 [2024-11-18 04:59:11.162108] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:47.885 [2024-11-18 04:59:11.162141] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:47.885 [2024-11-18 04:59:11.162166] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:47.885 [2024-11-18 04:59:11.162187] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.885 [2024-11-18 04:59:11.162216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:18:47.885 request: 00:18:47.885 { 00:18:47.885 "name": "raid_bdev1", 00:18:47.885 "raid_level": "raid1", 00:18:47.885 "base_bdevs": [ 00:18:47.885 "malloc1", 00:18:47.885 "malloc2", 00:18:47.885 "malloc3", 00:18:47.885 "malloc4" 00:18:47.885 ], 00:18:47.885 "superblock": false, 00:18:47.885 "method": "bdev_raid_create", 00:18:47.885 "req_id": 1 00:18:47.885 } 00:18:47.885 Got JSON-RPC error response 00:18:47.885 response: 00:18:47.885 { 00:18:47.885 "code": -17, 00:18:47.885 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:47.885 } 00:18:47.885 04:59:11 -- common/autotest_common.sh@653 -- # es=1 00:18:47.885 04:59:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.885 04:59:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.885 04:59:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.885 04:59:11 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:47.885 04:59:11 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.144 04:59:11 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:48.144 04:59:11 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:48.144 04:59:11 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:48.144 [2024-11-18 04:59:11.663910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:48.144 [2024-11-18 04:59:11.663999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.144 [2024-11-18 04:59:11.664048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:18:48.144 [2024-11-18 04:59:11.664061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.144 [2024-11-18 04:59:11.666765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.144 [2024-11-18 04:59:11.666810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:48.144 [2024-11-18 04:59:11.666918] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:48.144 [2024-11-18 04:59:11.666992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:48.403 pt1 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.403 04:59:11 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.403 "name": "raid_bdev1", 00:18:48.403 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:48.403 "strip_size_kb": 0, 00:18:48.403 "state": "configuring", 00:18:48.403 "raid_level": "raid1", 00:18:48.403 "superblock": true, 00:18:48.403 "num_base_bdevs": 4, 00:18:48.403 "num_base_bdevs_discovered": 1, 00:18:48.403 "num_base_bdevs_operational": 4, 00:18:48.403 "base_bdevs_list": [ 00:18:48.403 { 00:18:48.403 "name": "pt1", 00:18:48.403 "uuid": "67773b9f-5982-58b7-8ce0-d71b4ad77ecb", 00:18:48.403 "is_configured": true, 00:18:48.403 "data_offset": 2048, 00:18:48.403 "data_size": 63488 00:18:48.403 }, 00:18:48.403 { 00:18:48.403 "name": null, 00:18:48.403 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:48.403 "is_configured": false, 00:18:48.403 "data_offset": 2048, 00:18:48.403 "data_size": 63488 00:18:48.403 }, 00:18:48.403 { 00:18:48.403 "name": null, 00:18:48.403 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:48.403 "is_configured": false, 00:18:48.403 "data_offset": 2048, 00:18:48.403 "data_size": 63488 00:18:48.403 }, 00:18:48.403 { 00:18:48.403 "name": null, 00:18:48.403 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:48.403 "is_configured": false, 00:18:48.403 "data_offset": 2048, 00:18:48.403 "data_size": 63488 00:18:48.403 } 00:18:48.403 ] 00:18:48.403 }' 00:18:48.403 04:59:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.403 04:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.970 04:59:12 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:48.970 04:59:12 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.970 [2024-11-18 04:59:12.468150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.970 [2024-11-18 04:59:12.468294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.970 [2024-11-18 04:59:12.468335] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:18:48.970 [2024-11-18 04:59:12.468350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.970 [2024-11-18 04:59:12.468838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.970 [2024-11-18 04:59:12.468884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.970 [2024-11-18 04:59:12.468982] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:48.970 [2024-11-18 04:59:12.469008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.970 pt2 00:18:48.970 04:59:12 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:49.228 [2024-11-18 04:59:12.716298] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.228 04:59:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.487 04:59:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.487 "name": "raid_bdev1", 00:18:49.487 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:49.487 "strip_size_kb": 0, 00:18:49.487 "state": "configuring", 00:18:49.487 "raid_level": "raid1", 00:18:49.487 "superblock": true, 00:18:49.487 "num_base_bdevs": 4, 00:18:49.487 "num_base_bdevs_discovered": 1, 00:18:49.487 "num_base_bdevs_operational": 4, 00:18:49.487 "base_bdevs_list": [ 00:18:49.487 { 00:18:49.487 "name": "pt1", 00:18:49.487 "uuid": "67773b9f-5982-58b7-8ce0-d71b4ad77ecb", 00:18:49.487 "is_configured": true, 00:18:49.487 "data_offset": 2048, 00:18:49.487 "data_size": 63488 00:18:49.487 }, 00:18:49.487 { 00:18:49.487 "name": null, 00:18:49.487 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:49.487 "is_configured": false, 00:18:49.487 "data_offset": 2048, 00:18:49.487 "data_size": 63488 00:18:49.487 }, 00:18:49.487 { 00:18:49.487 "name": null, 00:18:49.487 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:49.487 "is_configured": false, 00:18:49.487 "data_offset": 2048, 00:18:49.487 "data_size": 63488 00:18:49.487 }, 00:18:49.487 { 00:18:49.487 "name": null, 00:18:49.487 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:49.487 "is_configured": false, 00:18:49.487 "data_offset": 2048, 00:18:49.487 "data_size": 63488 00:18:49.487 } 00:18:49.487 ] 00:18:49.487 }' 00:18:49.487 04:59:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.487 04:59:12 -- common/autotest_common.sh@10 -- # set +x 00:18:50.054 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:50.054 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:50.054 04:59:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:50.054 [2024-11-18 04:59:13.472434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:50.054 [2024-11-18 04:59:13.472523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.054 [2024-11-18 04:59:13.472551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:18:50.054 [2024-11-18 04:59:13.472567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.054 [2024-11-18 04:59:13.473040] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.054 [2024-11-18 04:59:13.473069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:50.054 [2024-11-18 04:59:13.473163] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:50.054 [2024-11-18 
04:59:13.473197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.054 pt2 00:18:50.054 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:50.054 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:50.054 04:59:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:50.313 [2024-11-18 04:59:13.716529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:50.313 [2024-11-18 04:59:13.716616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.313 [2024-11-18 04:59:13.716645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:18:50.313 [2024-11-18 04:59:13.716661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.313 [2024-11-18 04:59:13.717149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.313 [2024-11-18 04:59:13.717185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:50.313 [2024-11-18 04:59:13.717308] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:50.313 [2024-11-18 04:59:13.717341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:50.313 pt3 00:18:50.313 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:50.313 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:50.313 04:59:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:50.572 [2024-11-18 04:59:13.968599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:50.572 [2024-11-18 04:59:13.968712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.572 [2024-11-18 04:59:13.968742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:18:50.572 [2024-11-18 04:59:13.968946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.572 [2024-11-18 04:59:13.969413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.572 [2024-11-18 04:59:13.969463] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:50.572 [2024-11-18 04:59:13.969558] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:50.572 [2024-11-18 04:59:13.969598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:50.572 [2024-11-18 04:59:13.969750] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:18:50.572 [2024-11-18 04:59:13.969769] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:50.572 [2024-11-18 04:59:13.969870] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:50.572 [2024-11-18 04:59:13.970256] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:18:50.572 [2024-11-18 04:59:13.970272] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:18:50.572 [2024-11-18 04:59:13.970419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.572 pt4 
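Note that no second bdev_raid_create is issued in this stretch: because the array was created with -s, every member carries an on-disk superblock, so as each passthru bdev is re-registered the examine path (raid_bdev_examine_load_sb_cb above) finds the superblock and re-claims the bdev, and the raid goes online by itself once the last member appears. A sketch of the two RPCs involved, both copied verbatim from this trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # original create: -s writes a superblock to every base bdev
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # later, simply re-registering a member is enough; examine reports
  # "raid superblock found on bdev pt2" and claims it, no explicit create needed
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002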
00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.572 04:59:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.831 04:59:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.831 "name": "raid_bdev1", 00:18:50.831 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:50.831 "strip_size_kb": 0, 00:18:50.831 "state": "online", 00:18:50.831 "raid_level": "raid1", 00:18:50.831 "superblock": true, 00:18:50.831 "num_base_bdevs": 4, 00:18:50.831 "num_base_bdevs_discovered": 4, 00:18:50.831 "num_base_bdevs_operational": 4, 00:18:50.831 "base_bdevs_list": [ 00:18:50.831 { 00:18:50.831 "name": "pt1", 00:18:50.831 "uuid": "67773b9f-5982-58b7-8ce0-d71b4ad77ecb", 00:18:50.831 "is_configured": true, 00:18:50.831 "data_offset": 2048, 00:18:50.831 "data_size": 63488 00:18:50.831 }, 00:18:50.831 { 00:18:50.831 "name": "pt2", 00:18:50.831 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:50.831 "is_configured": true, 00:18:50.831 "data_offset": 2048, 00:18:50.831 "data_size": 63488 00:18:50.831 }, 00:18:50.831 { 00:18:50.831 "name": "pt3", 00:18:50.831 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:50.831 "is_configured": true, 00:18:50.831 "data_offset": 2048, 00:18:50.831 "data_size": 63488 00:18:50.831 }, 00:18:50.831 { 00:18:50.831 "name": "pt4", 00:18:50.831 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:50.831 "is_configured": true, 00:18:50.831 "data_offset": 2048, 00:18:50.831 "data_size": 63488 00:18:50.831 } 00:18:50.831 ] 00:18:50.831 }' 00:18:50.831 04:59:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.831 04:59:14 -- common/autotest_common.sh@10 -- # set +x 00:18:51.089 04:59:14 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:51.089 04:59:14 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:51.347 [2024-11-18 04:59:14.729047] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.347 04:59:14 -- bdev/bdev_raid.sh@430 -- # '[' f81ccf36-7442-473a-b3ed-34bb04acc439 '!=' f81ccf36-7442-473a-b3ed-34bb04acc439 ']' 00:18:51.347 04:59:14 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:51.347 04:59:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:51.347 04:59:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:51.347 04:59:14 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:51.606 [2024-11-18 04:59:14.976943] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.606 04:59:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.864 04:59:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.864 "name": "raid_bdev1", 00:18:51.864 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:51.864 "strip_size_kb": 0, 00:18:51.864 "state": "online", 00:18:51.864 "raid_level": "raid1", 00:18:51.864 "superblock": true, 00:18:51.864 "num_base_bdevs": 4, 00:18:51.864 "num_base_bdevs_discovered": 3, 00:18:51.864 "num_base_bdevs_operational": 3, 00:18:51.864 "base_bdevs_list": [ 00:18:51.864 { 00:18:51.864 "name": null, 00:18:51.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.864 "is_configured": false, 00:18:51.864 "data_offset": 2048, 00:18:51.864 "data_size": 63488 00:18:51.864 }, 00:18:51.864 { 00:18:51.864 "name": "pt2", 00:18:51.864 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:51.864 "is_configured": true, 00:18:51.864 "data_offset": 2048, 00:18:51.864 "data_size": 63488 00:18:51.864 }, 00:18:51.864 { 00:18:51.864 "name": "pt3", 00:18:51.864 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:51.864 "is_configured": true, 00:18:51.864 "data_offset": 2048, 00:18:51.864 "data_size": 63488 00:18:51.864 }, 00:18:51.864 { 00:18:51.864 "name": "pt4", 00:18:51.864 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:51.864 "is_configured": true, 00:18:51.864 "data_offset": 2048, 00:18:51.864 "data_size": 63488 00:18:51.864 } 00:18:51.864 ] 00:18:51.864 }' 00:18:51.864 04:59:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.864 04:59:15 -- common/autotest_common.sh@10 -- # set +x 00:18:52.123 04:59:15 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.381 [2024-11-18 04:59:15.727679] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.381 [2024-11-18 04:59:15.727716] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.381 [2024-11-18 04:59:15.727792] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.381 [2024-11-18 04:59:15.727879] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.381 [2024-11-18 04:59:15.727893] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:18:52.381 04:59:15 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:52.381 04:59:15 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:52.640 04:59:15 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:52.640 04:59:15 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:52.640 04:59:15 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:52.640 04:59:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:52.640 04:59:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:52.899 04:59:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:52.899 04:59:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:52.899 04:59:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:53.158 04:59:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:53.158 04:59:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:53.158 04:59:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:53.416 [2024-11-18 04:59:16.871995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.416 [2024-11-18 04:59:16.872100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.416 [2024-11-18 04:59:16.872138] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:18:53.416 [2024-11-18 04:59:16.872152] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.416 [2024-11-18 04:59:16.874717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.416 [2024-11-18 04:59:16.874762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:53.416 [2024-11-18 04:59:16.874874] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:53.416 [2024-11-18 04:59:16.874973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.416 pt2 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.416 04:59:16 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.675 04:59:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.675 "name": "raid_bdev1", 00:18:53.675 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:53.675 "strip_size_kb": 0, 00:18:53.675 "state": "configuring", 00:18:53.675 "raid_level": "raid1", 00:18:53.675 "superblock": true, 00:18:53.675 "num_base_bdevs": 4, 00:18:53.675 "num_base_bdevs_discovered": 1, 00:18:53.675 "num_base_bdevs_operational": 3, 00:18:53.675 "base_bdevs_list": [ 00:18:53.675 { 00:18:53.675 "name": null, 00:18:53.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.675 "is_configured": false, 00:18:53.675 "data_offset": 2048, 00:18:53.675 "data_size": 63488 00:18:53.675 }, 00:18:53.675 { 00:18:53.675 "name": "pt2", 00:18:53.675 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:53.675 "is_configured": true, 00:18:53.675 "data_offset": 2048, 00:18:53.675 "data_size": 63488 00:18:53.675 }, 00:18:53.675 { 00:18:53.675 "name": null, 00:18:53.675 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:53.675 "is_configured": false, 00:18:53.675 "data_offset": 2048, 00:18:53.675 "data_size": 63488 00:18:53.675 }, 00:18:53.675 { 00:18:53.675 "name": null, 00:18:53.675 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:53.675 "is_configured": false, 00:18:53.675 "data_offset": 2048, 00:18:53.675 "data_size": 63488 00:18:53.675 } 00:18:53.675 ] 00:18:53.675 }' 00:18:53.675 04:59:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.675 04:59:17 -- common/autotest_common.sh@10 -- # set +x 00:18:53.934 04:59:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:53.934 04:59:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:53.934 04:59:17 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:54.191 [2024-11-18 04:59:17.656219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:54.191 [2024-11-18 04:59:17.656311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.191 [2024-11-18 04:59:17.656346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:18:54.191 [2024-11-18 04:59:17.656360] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.191 [2024-11-18 04:59:17.656852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.191 [2024-11-18 04:59:17.656875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:54.191 [2024-11-18 04:59:17.656969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:54.191 [2024-11-18 04:59:17.656994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:54.191 pt3 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.191 04:59:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.758 04:59:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.758 "name": "raid_bdev1", 00:18:54.758 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:54.758 "strip_size_kb": 0, 00:18:54.758 "state": "configuring", 00:18:54.758 "raid_level": "raid1", 00:18:54.758 "superblock": true, 00:18:54.758 "num_base_bdevs": 4, 00:18:54.758 "num_base_bdevs_discovered": 2, 00:18:54.758 "num_base_bdevs_operational": 3, 00:18:54.758 "base_bdevs_list": [ 00:18:54.758 { 00:18:54.758 "name": null, 00:18:54.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.758 "is_configured": false, 00:18:54.758 "data_offset": 2048, 00:18:54.758 "data_size": 63488 00:18:54.758 }, 00:18:54.758 { 00:18:54.758 "name": "pt2", 00:18:54.758 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:54.758 "is_configured": true, 00:18:54.758 "data_offset": 2048, 00:18:54.758 "data_size": 63488 00:18:54.758 }, 00:18:54.758 { 00:18:54.758 "name": "pt3", 00:18:54.758 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:54.758 "is_configured": true, 00:18:54.758 "data_offset": 2048, 00:18:54.758 "data_size": 63488 00:18:54.758 }, 00:18:54.758 { 00:18:54.758 "name": null, 00:18:54.758 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:54.758 "is_configured": false, 00:18:54.758 "data_offset": 2048, 00:18:54.758 "data_size": 63488 00:18:54.758 } 00:18:54.758 ] 00:18:54.758 }' 00:18:54.758 04:59:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.758 04:59:17 -- common/autotest_common.sh@10 -- # set +x 00:18:54.758 04:59:18 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:54.758 04:59:18 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:54.758 04:59:18 -- bdev/bdev_raid.sh@462 -- # i=3 00:18:54.758 04:59:18 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:55.017 [2024-11-18 04:59:18.488518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:55.017 [2024-11-18 04:59:18.488667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.017 [2024-11-18 04:59:18.488710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:18:55.017 [2024-11-18 04:59:18.488725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.017 [2024-11-18 04:59:18.489288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.017 [2024-11-18 04:59:18.489324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:55.017 [2024-11-18 04:59:18.489449] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:55.017 [2024-11-18 04:59:18.489523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:55.017 [2024-11-18 04:59:18.489692] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:18:55.017 [2024-11-18 04:59:18.489715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:18:55.017 [2024-11-18 04:59:18.489824] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:55.017 [2024-11-18 04:59:18.490293] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:18:55.017 [2024-11-18 04:59:18.490322] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:18:55.017 [2024-11-18 04:59:18.490478] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.017 pt4 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.017 04:59:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.276 04:59:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.276 "name": "raid_bdev1", 00:18:55.276 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:55.276 "strip_size_kb": 0, 00:18:55.276 "state": "online", 00:18:55.276 "raid_level": "raid1", 00:18:55.276 "superblock": true, 00:18:55.276 "num_base_bdevs": 4, 00:18:55.276 "num_base_bdevs_discovered": 3, 00:18:55.276 "num_base_bdevs_operational": 3, 00:18:55.276 "base_bdevs_list": [ 00:18:55.276 { 00:18:55.276 "name": null, 00:18:55.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.276 "is_configured": false, 00:18:55.276 "data_offset": 2048, 00:18:55.276 "data_size": 63488 00:18:55.276 }, 00:18:55.276 { 00:18:55.276 "name": "pt2", 00:18:55.276 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:55.276 "is_configured": true, 00:18:55.276 "data_offset": 2048, 00:18:55.276 "data_size": 63488 00:18:55.276 }, 00:18:55.276 { 00:18:55.276 "name": "pt3", 00:18:55.276 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:55.276 "is_configured": true, 00:18:55.276 "data_offset": 2048, 00:18:55.276 "data_size": 63488 00:18:55.276 }, 00:18:55.276 { 00:18:55.276 "name": "pt4", 00:18:55.277 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:55.277 "is_configured": true, 00:18:55.277 "data_offset": 2048, 00:18:55.277 "data_size": 63488 00:18:55.277 } 00:18:55.277 ] 00:18:55.277 }' 00:18:55.277 04:59:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.277 04:59:18 -- common/autotest_common.sh@10 -- # set +x 00:18:55.535 04:59:18 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:18:55.535 04:59:18 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:55.794 [2024-11-18 04:59:19.236770] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.794 [2024-11-18 04:59:19.236804] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:18:55.794 [2024-11-18 04:59:19.236899] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.794 [2024-11-18 04:59:19.236975] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.794 [2024-11-18 04:59:19.236994] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:18:55.794 04:59:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:55.794 04:59:19 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.053 04:59:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:56.053 04:59:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:56.053 04:59:19 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:56.312 [2024-11-18 04:59:19.708849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:56.312 [2024-11-18 04:59:19.708937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.312 [2024-11-18 04:59:19.708966] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:18:56.312 [2024-11-18 04:59:19.708981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.312 [2024-11-18 04:59:19.711625] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.312 [2024-11-18 04:59:19.711670] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:56.312 [2024-11-18 04:59:19.711781] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:56.312 [2024-11-18 04:59:19.711840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:56.312 pt1 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.312 04:59:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.571 04:59:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.571 "name": "raid_bdev1", 00:18:56.571 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:56.571 "strip_size_kb": 0, 00:18:56.571 "state": "configuring", 00:18:56.571 "raid_level": "raid1", 00:18:56.571 "superblock": true, 00:18:56.571 "num_base_bdevs": 4, 00:18:56.571 "num_base_bdevs_discovered": 1, 00:18:56.571 "num_base_bdevs_operational": 4, 00:18:56.571 "base_bdevs_list": [ 00:18:56.571 { 00:18:56.571 "name": "pt1", 00:18:56.571 "uuid": 
"67773b9f-5982-58b7-8ce0-d71b4ad77ecb", 00:18:56.571 "is_configured": true, 00:18:56.571 "data_offset": 2048, 00:18:56.571 "data_size": 63488 00:18:56.571 }, 00:18:56.571 { 00:18:56.571 "name": null, 00:18:56.571 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:56.571 "is_configured": false, 00:18:56.571 "data_offset": 2048, 00:18:56.571 "data_size": 63488 00:18:56.571 }, 00:18:56.571 { 00:18:56.571 "name": null, 00:18:56.571 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:56.571 "is_configured": false, 00:18:56.571 "data_offset": 2048, 00:18:56.571 "data_size": 63488 00:18:56.571 }, 00:18:56.571 { 00:18:56.571 "name": null, 00:18:56.571 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:56.571 "is_configured": false, 00:18:56.571 "data_offset": 2048, 00:18:56.571 "data_size": 63488 00:18:56.571 } 00:18:56.571 ] 00:18:56.571 }' 00:18:56.571 04:59:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.571 04:59:19 -- common/autotest_common.sh@10 -- # set +x 00:18:56.830 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:56.830 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:56.830 04:59:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:57.088 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:57.088 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:57.088 04:59:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:57.347 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:57.347 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:57.347 04:59:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:57.608 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:57.608 04:59:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:57.608 04:59:20 -- bdev/bdev_raid.sh@489 -- # i=3 00:18:57.608 04:59:20 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:57.868 [2024-11-18 04:59:21.154480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:57.868 [2024-11-18 04:59:21.154600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.868 [2024-11-18 04:59:21.154631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:18:57.868 [2024-11-18 04:59:21.154647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.868 [2024-11-18 04:59:21.155125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.868 [2024-11-18 04:59:21.155163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:57.868 [2024-11-18 04:59:21.155297] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:57.868 [2024-11-18 04:59:21.155324] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:57.868 [2024-11-18 04:59:21.155336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.868 [2024-11-18 04:59:21.155363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 
00:18:57.868 [2024-11-18 04:59:21.155435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:57.868 pt4 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.868 04:59:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.126 04:59:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.126 "name": "raid_bdev1", 00:18:58.126 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:58.126 "strip_size_kb": 0, 00:18:58.126 "state": "configuring", 00:18:58.126 "raid_level": "raid1", 00:18:58.126 "superblock": true, 00:18:58.126 "num_base_bdevs": 4, 00:18:58.126 "num_base_bdevs_discovered": 1, 00:18:58.126 "num_base_bdevs_operational": 3, 00:18:58.126 "base_bdevs_list": [ 00:18:58.126 { 00:18:58.126 "name": null, 00:18:58.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.126 "is_configured": false, 00:18:58.126 "data_offset": 2048, 00:18:58.126 "data_size": 63488 00:18:58.126 }, 00:18:58.126 { 00:18:58.126 "name": null, 00:18:58.126 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:58.126 "is_configured": false, 00:18:58.126 "data_offset": 2048, 00:18:58.126 "data_size": 63488 00:18:58.126 }, 00:18:58.127 { 00:18:58.127 "name": null, 00:18:58.127 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:58.127 "is_configured": false, 00:18:58.127 "data_offset": 2048, 00:18:58.127 "data_size": 63488 00:18:58.127 }, 00:18:58.127 { 00:18:58.127 "name": "pt4", 00:18:58.127 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:58.127 "is_configured": true, 00:18:58.127 "data_offset": 2048, 00:18:58.127 "data_size": 63488 00:18:58.127 } 00:18:58.127 ] 00:18:58.127 }' 00:18:58.127 04:59:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.127 04:59:21 -- common/autotest_common.sh@10 -- # set +x 00:18:58.385 04:59:21 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:58.385 04:59:21 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:58.385 04:59:21 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.385 [2024-11-18 04:59:21.906932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.385 [2024-11-18 04:59:21.907171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.385 [2024-11-18 04:59:21.907382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:18:58.385 [2024-11-18 04:59:21.907533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.643 [2024-11-18 
04:59:21.908194] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.643 [2024-11-18 04:59:21.908388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.643 [2024-11-18 04:59:21.908615] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:58.643 [2024-11-18 04:59:21.908654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.643 pt2 00:18:58.643 04:59:21 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:58.643 04:59:21 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:58.643 04:59:21 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:58.643 [2024-11-18 04:59:22.131567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:58.643 [2024-11-18 04:59:22.131833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.643 [2024-11-18 04:59:22.132004] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:18:58.643 [2024-11-18 04:59:22.132161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.643 [2024-11-18 04:59:22.132731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.643 [2024-11-18 04:59:22.132783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:58.643 [2024-11-18 04:59:22.132906] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:58.643 [2024-11-18 04:59:22.132941] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:58.643 [2024-11-18 04:59:22.133089] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:18:58.643 [2024-11-18 04:59:22.133103] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:58.643 [2024-11-18 04:59:22.133227] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:18:58.643 [2024-11-18 04:59:22.133648] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:18:58.643 [2024-11-18 04:59:22.133682] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:18:58.643 [2024-11-18 04:59:22.133849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.643 pt3 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.643 04:59:22 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.643 04:59:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.901 04:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.901 "name": "raid_bdev1", 00:18:58.901 "uuid": "f81ccf36-7442-473a-b3ed-34bb04acc439", 00:18:58.901 "strip_size_kb": 0, 00:18:58.902 "state": "online", 00:18:58.902 "raid_level": "raid1", 00:18:58.902 "superblock": true, 00:18:58.902 "num_base_bdevs": 4, 00:18:58.902 "num_base_bdevs_discovered": 3, 00:18:58.902 "num_base_bdevs_operational": 3, 00:18:58.902 "base_bdevs_list": [ 00:18:58.902 { 00:18:58.902 "name": null, 00:18:58.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.902 "is_configured": false, 00:18:58.902 "data_offset": 2048, 00:18:58.902 "data_size": 63488 00:18:58.902 }, 00:18:58.902 { 00:18:58.902 "name": "pt2", 00:18:58.902 "uuid": "88e020a1-8cec-5b4c-bd38-a85de54925e3", 00:18:58.902 "is_configured": true, 00:18:58.902 "data_offset": 2048, 00:18:58.902 "data_size": 63488 00:18:58.902 }, 00:18:58.902 { 00:18:58.902 "name": "pt3", 00:18:58.902 "uuid": "370f9f5a-e567-531e-92da-088ce06504b2", 00:18:58.902 "is_configured": true, 00:18:58.902 "data_offset": 2048, 00:18:58.902 "data_size": 63488 00:18:58.902 }, 00:18:58.902 { 00:18:58.902 "name": "pt4", 00:18:58.902 "uuid": "979d7e6a-b23f-5a4d-b0da-3e1cb932d705", 00:18:58.902 "is_configured": true, 00:18:58.902 "data_offset": 2048, 00:18:58.902 "data_size": 63488 00:18:58.902 } 00:18:58.902 ] 00:18:58.902 }' 00:18:58.902 04:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.902 04:59:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.468 04:59:22 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:59.468 04:59:22 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:59.468 [2024-11-18 04:59:22.904091] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.468 04:59:22 -- bdev/bdev_raid.sh@506 -- # '[' f81ccf36-7442-473a-b3ed-34bb04acc439 '!=' f81ccf36-7442-473a-b3ed-34bb04acc439 ']' 00:18:59.468 04:59:22 -- bdev/bdev_raid.sh@511 -- # killprocess 77585 00:18:59.468 04:59:22 -- common/autotest_common.sh@936 -- # '[' -z 77585 ']' 00:18:59.468 04:59:22 -- common/autotest_common.sh@940 -- # kill -0 77585 00:18:59.468 04:59:22 -- common/autotest_common.sh@941 -- # uname 00:18:59.468 04:59:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.468 04:59:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77585 00:18:59.468 killing process with pid 77585 00:18:59.468 04:59:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.468 04:59:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.468 04:59:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77585' 00:18:59.468 04:59:22 -- common/autotest_common.sh@955 -- # kill 77585 00:18:59.468 [2024-11-18 04:59:22.954118] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.468 [2024-11-18 04:59:22.954231] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.468 [2024-11-18 04:59:22.954354] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.469 [2024-11-18 04:59:22.954376] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x51600000cf80 name raid_bdev1, state offline 00:18:59.469 04:59:22 -- common/autotest_common.sh@960 -- # wait 77585 00:19:00.035 [2024-11-18 04:59:23.270675] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:00.970 ************************************ 00:19:00.970 END TEST raid_superblock_test 00:19:00.970 ************************************ 00:19:00.970 00:19:00.970 real 0m18.950s 00:19:00.970 user 0m32.900s 00:19:00.970 sys 0m2.753s 00:19:00.970 04:59:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:00.970 04:59:24 -- common/autotest_common.sh@10 -- # set +x 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:00.970 04:59:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:00.970 04:59:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:00.970 04:59:24 -- common/autotest_common.sh@10 -- # set +x 00:19:00.970 ************************************ 00:19:00.970 START TEST raid_rebuild_test 00:19:00.970 ************************************ 00:19:00.970 04:59:24 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:00.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@544 -- # raid_pid=78200 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 78200 /var/tmp/spdk-raid.sock 00:19:00.970 04:59:24 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:00.970 04:59:24 -- common/autotest_common.sh@829 -- # '[' -z 78200 ']' 00:19:00.970 04:59:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:00.970 04:59:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.970 04:59:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:00.970 04:59:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.970 04:59:24 -- common/autotest_common.sh@10 -- # set +x 00:19:00.970 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:00.970 Zero copy mechanism will not be used. 00:19:00.970 [2024-11-18 04:59:24.482978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
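The rebuild tests run against a standalone bdevperf app rather than spdk_tgt: bdevperf is started with -z so it comes up idle and exposes an RPC socket at /var/tmp/spdk-raid.sock, waitforlisten blocks until that socket answers, and every bdev_* step that follows is an rpc.py call against it. A rough sketch of the wait, assuming waitforlisten in autotest_common.sh polls rpc_get_methods (my reading of the helper, not a verbatim copy):

    # poll the app's RPC socket until it is ready to accept commands
    while ! rpc.py -t 1 -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done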
00:19:00.970 [2024-11-18 04:59:24.483117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78200 ] 00:19:01.229 [2024-11-18 04:59:24.648169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.487 [2024-11-18 04:59:24.873394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.745 [2024-11-18 04:59:25.046635] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.004 04:59:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.004 04:59:25 -- common/autotest_common.sh@862 -- # return 0 00:19:02.004 04:59:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:02.004 04:59:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:02.004 04:59:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:02.262 BaseBdev1 00:19:02.262 04:59:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:02.262 04:59:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:02.262 04:59:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.521 BaseBdev2 00:19:02.521 04:59:25 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:02.779 spare_malloc 00:19:02.779 04:59:26 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:03.037 spare_delay 00:19:03.037 04:59:26 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:03.296 [2024-11-18 04:59:26.635976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.296 [2024-11-18 04:59:26.636078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.296 [2024-11-18 04:59:26.636124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:03.296 [2024-11-18 04:59:26.636142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.296 [2024-11-18 04:59:26.639085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.296 [2024-11-18 04:59:26.639159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.296 spare 00:19:03.296 04:59:26 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:03.555 [2024-11-18 04:59:26.848158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.555 [2024-11-18 04:59:26.850297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.555 [2024-11-18 04:59:26.850417] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:03.555 [2024-11-18 04:59:26.850438] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:03.555 [2024-11-18 04:59:26.850655] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d0000056c0 00:19:03.555 [2024-11-18 04:59:26.851121] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:03.555 [2024-11-18 04:59:26.851146] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:03.555 [2024-11-18 04:59:26.851387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.555 04:59:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.814 04:59:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.814 "name": "raid_bdev1", 00:19:03.814 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:03.814 "strip_size_kb": 0, 00:19:03.814 "state": "online", 00:19:03.814 "raid_level": "raid1", 00:19:03.814 "superblock": false, 00:19:03.814 "num_base_bdevs": 2, 00:19:03.814 "num_base_bdevs_discovered": 2, 00:19:03.814 "num_base_bdevs_operational": 2, 00:19:03.814 "base_bdevs_list": [ 00:19:03.814 { 00:19:03.814 "name": "BaseBdev1", 00:19:03.814 "uuid": "98488752-3844-4e8b-b694-c848a1dc3ae2", 00:19:03.814 "is_configured": true, 00:19:03.814 "data_offset": 0, 00:19:03.814 "data_size": 65536 00:19:03.814 }, 00:19:03.814 { 00:19:03.814 "name": "BaseBdev2", 00:19:03.814 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:03.814 "is_configured": true, 00:19:03.814 "data_offset": 0, 00:19:03.814 "data_size": 65536 00:19:03.814 } 00:19:03.814 ] 00:19:03.814 }' 00:19:03.814 04:59:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.814 04:59:27 -- common/autotest_common.sh@10 -- # set +x 00:19:04.073 04:59:27 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:04.073 04:59:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:04.332 [2024-11-18 04:59:27.692666] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.332 04:59:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:04.332 04:59:27 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.332 04:59:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:04.591 04:59:27 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:04.591 04:59:27 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:04.591 04:59:27 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:04.591 04:59:27 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:04.591 04:59:27 
-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@12 -- # local i 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.591 04:59:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:04.850 [2024-11-18 04:59:28.176670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:04.850 /dev/nbd0 00:19:04.850 04:59:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:04.850 04:59:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:04.850 04:59:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:04.850 04:59:28 -- common/autotest_common.sh@867 -- # local i 00:19:04.850 04:59:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:04.850 04:59:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:04.850 04:59:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:04.850 04:59:28 -- common/autotest_common.sh@871 -- # break 00:19:04.850 04:59:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:04.850 04:59:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:04.850 04:59:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.850 1+0 records in 00:19:04.850 1+0 records out 00:19:04.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216416 s, 18.9 MB/s 00:19:04.850 04:59:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.850 04:59:28 -- common/autotest_common.sh@884 -- # size=4096 00:19:04.850 04:59:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.850 04:59:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:04.850 04:59:28 -- common/autotest_common.sh@887 -- # return 0 00:19:04.850 04:59:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.850 04:59:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.850 04:59:28 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:04.850 04:59:28 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:04.850 04:59:28 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:11.417 65536+0 records in 00:19:11.417 65536+0 records out 00:19:11.417 33554432 bytes (34 MB, 32 MiB) copied, 5.62377 s, 6.0 MB/s 00:19:11.417 04:59:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:11.417 04:59:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:11.417 04:59:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:11.417 04:59:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.417 04:59:33 -- bdev/nbd_common.sh@51 -- # local i 00:19:11.417 04:59:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.417 04:59:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:11.417 [2024-11-18 04:59:34.085735] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@41 -- # break 00:19:11.417 04:59:34 -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:11.417 [2024-11-18 04:59:34.305884] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.417 "name": "raid_bdev1", 00:19:11.417 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:11.417 "strip_size_kb": 0, 00:19:11.417 "state": "online", 00:19:11.417 "raid_level": "raid1", 00:19:11.417 "superblock": false, 00:19:11.417 "num_base_bdevs": 2, 00:19:11.417 "num_base_bdevs_discovered": 1, 00:19:11.417 "num_base_bdevs_operational": 1, 00:19:11.417 "base_bdevs_list": [ 00:19:11.417 { 00:19:11.417 "name": null, 00:19:11.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.417 "is_configured": false, 00:19:11.417 "data_offset": 0, 00:19:11.417 "data_size": 65536 00:19:11.417 }, 00:19:11.417 { 00:19:11.417 "name": "BaseBdev2", 00:19:11.417 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:11.417 "is_configured": true, 00:19:11.417 "data_offset": 0, 00:19:11.417 "data_size": 65536 00:19:11.417 } 00:19:11.417 ] 00:19:11.417 }' 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.417 04:59:34 -- common/autotest_common.sh@10 -- # set +x 00:19:11.417 04:59:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.676 [2024-11-18 04:59:35.082054] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:11.676 [2024-11-18 04:59:35.082119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.677 [2024-11-18 04:59:35.095992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09480 00:19:11.677 [2024-11-18 04:59:35.098037] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.677 04:59:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.613 04:59:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.871 04:59:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:12.871 "name": "raid_bdev1", 00:19:12.871 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:12.871 "strip_size_kb": 0, 00:19:12.871 "state": "online", 00:19:12.871 "raid_level": "raid1", 00:19:12.871 "superblock": false, 00:19:12.871 "num_base_bdevs": 2, 00:19:12.871 "num_base_bdevs_discovered": 2, 00:19:12.871 "num_base_bdevs_operational": 2, 00:19:12.871 "process": { 00:19:12.871 "type": "rebuild", 00:19:12.871 "target": "spare", 00:19:12.871 "progress": { 00:19:12.871 "blocks": 24576, 00:19:12.871 "percent": 37 00:19:12.871 } 00:19:12.871 }, 00:19:12.871 "base_bdevs_list": [ 00:19:12.871 { 00:19:12.871 "name": "spare", 00:19:12.871 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:12.871 "is_configured": true, 00:19:12.871 "data_offset": 0, 00:19:12.871 "data_size": 65536 00:19:12.871 }, 00:19:12.871 { 00:19:12.871 "name": "BaseBdev2", 00:19:12.871 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:12.871 "is_configured": true, 00:19:12.871 "data_offset": 0, 00:19:12.871 "data_size": 65536 00:19:12.871 } 00:19:12.871 ] 00:19:12.871 }' 00:19:12.871 04:59:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:12.871 04:59:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.871 04:59:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:12.871 04:59:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.871 04:59:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:13.131 [2024-11-18 04:59:36.584798] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.131 [2024-11-18 04:59:36.605130] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.131 [2024-11-18 04:59:36.605235] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.131 
04:59:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.131 04:59:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.390 04:59:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.390 "name": "raid_bdev1", 00:19:13.390 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:13.390 "strip_size_kb": 0, 00:19:13.390 "state": "online", 00:19:13.390 "raid_level": "raid1", 00:19:13.390 "superblock": false, 00:19:13.390 "num_base_bdevs": 2, 00:19:13.390 "num_base_bdevs_discovered": 1, 00:19:13.390 "num_base_bdevs_operational": 1, 00:19:13.390 "base_bdevs_list": [ 00:19:13.390 { 00:19:13.390 "name": null, 00:19:13.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.390 "is_configured": false, 00:19:13.390 "data_offset": 0, 00:19:13.390 "data_size": 65536 00:19:13.390 }, 00:19:13.390 { 00:19:13.390 "name": "BaseBdev2", 00:19:13.390 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:13.390 "is_configured": true, 00:19:13.390 "data_offset": 0, 00:19:13.390 "data_size": 65536 00:19:13.390 } 00:19:13.390 ] 00:19:13.390 }' 00:19:13.390 04:59:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.390 04:59:36 -- common/autotest_common.sh@10 -- # set +x 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.958 04:59:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:13.958 "name": "raid_bdev1", 00:19:13.958 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:13.958 "strip_size_kb": 0, 00:19:13.958 "state": "online", 00:19:13.958 "raid_level": "raid1", 00:19:13.958 "superblock": false, 00:19:13.958 "num_base_bdevs": 2, 00:19:13.958 "num_base_bdevs_discovered": 1, 00:19:13.959 "num_base_bdevs_operational": 1, 00:19:13.959 "base_bdevs_list": [ 00:19:13.959 { 00:19:13.959 "name": null, 00:19:13.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.959 "is_configured": false, 00:19:13.959 "data_offset": 0, 00:19:13.959 "data_size": 65536 00:19:13.959 }, 00:19:13.959 { 00:19:13.959 "name": "BaseBdev2", 00:19:13.959 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:13.959 "is_configured": true, 00:19:13.959 "data_offset": 0, 00:19:13.959 "data_size": 65536 00:19:13.959 } 00:19:13.959 ] 00:19:13.959 }' 00:19:13.959 04:59:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:13.959 04:59:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:13.959 04:59:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:13.959 04:59:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:13.959 04:59:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.217 [2024-11-18 04:59:37.676703] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 
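At this point the test has exercised both halves of the rebuild lifecycle: the spare was removed while the first rebuild was still running (hence the "Finished rebuild ... No such device" warning above), the array degraded to a single discovered member, and the spare is now re-attached so a fresh rebuild can run to completion. The progress dumps that follow (24576, 30720, then 57344 blocks) come from repeatedly fetching the raid state; a condensed sketch of that polling, with the jq path matching the JSON dumps below:

    # poll until the rebuild process disappears from the raid bdev's state
    while :; do
        pct=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
        [[ "$pct" == done ]] && break
        echo "rebuild at ${pct}%"; sleep 1
    done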
00:19:14.218 [2024-11-18 04:59:37.676755] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.218 [2024-11-18 04:59:37.690612] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09550 00:19:14.218 [2024-11-18 04:59:37.692753] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.218 04:59:37 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:15.595 "name": "raid_bdev1", 00:19:15.595 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:15.595 "strip_size_kb": 0, 00:19:15.595 "state": "online", 00:19:15.595 "raid_level": "raid1", 00:19:15.595 "superblock": false, 00:19:15.595 "num_base_bdevs": 2, 00:19:15.595 "num_base_bdevs_discovered": 2, 00:19:15.595 "num_base_bdevs_operational": 2, 00:19:15.595 "process": { 00:19:15.595 "type": "rebuild", 00:19:15.595 "target": "spare", 00:19:15.595 "progress": { 00:19:15.595 "blocks": 24576, 00:19:15.595 "percent": 37 00:19:15.595 } 00:19:15.595 }, 00:19:15.595 "base_bdevs_list": [ 00:19:15.595 { 00:19:15.595 "name": "spare", 00:19:15.595 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:15.595 "is_configured": true, 00:19:15.595 "data_offset": 0, 00:19:15.595 "data_size": 65536 00:19:15.595 }, 00:19:15.595 { 00:19:15.595 "name": "BaseBdev2", 00:19:15.595 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:15.595 "is_configured": true, 00:19:15.595 "data_offset": 0, 00:19:15.595 "data_size": 65536 00:19:15.595 } 00:19:15.595 ] 00:19:15.595 }' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@657 -- # local timeout=352 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:15.595 04:59:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.853 04:59:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:15.853 "name": "raid_bdev1", 00:19:15.853 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:15.853 "strip_size_kb": 0, 00:19:15.853 "state": "online", 00:19:15.853 "raid_level": "raid1", 00:19:15.853 "superblock": false, 00:19:15.853 "num_base_bdevs": 2, 00:19:15.853 "num_base_bdevs_discovered": 2, 00:19:15.853 "num_base_bdevs_operational": 2, 00:19:15.853 "process": { 00:19:15.853 "type": "rebuild", 00:19:15.853 "target": "spare", 00:19:15.853 "progress": { 00:19:15.853 "blocks": 30720, 00:19:15.853 "percent": 46 00:19:15.853 } 00:19:15.853 }, 00:19:15.853 "base_bdevs_list": [ 00:19:15.853 { 00:19:15.853 "name": "spare", 00:19:15.853 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:15.853 "is_configured": true, 00:19:15.853 "data_offset": 0, 00:19:15.853 "data_size": 65536 00:19:15.853 }, 00:19:15.853 { 00:19:15.853 "name": "BaseBdev2", 00:19:15.853 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:15.853 "is_configured": true, 00:19:15.853 "data_offset": 0, 00:19:15.853 "data_size": 65536 00:19:15.853 } 00:19:15.853 ] 00:19:15.853 }' 00:19:15.853 04:59:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:15.853 04:59:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.853 04:59:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:15.853 04:59:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.853 04:59:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.788 04:59:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.047 04:59:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:17.047 "name": "raid_bdev1", 00:19:17.047 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:17.047 "strip_size_kb": 0, 00:19:17.047 "state": "online", 00:19:17.047 "raid_level": "raid1", 00:19:17.047 "superblock": false, 00:19:17.047 "num_base_bdevs": 2, 00:19:17.047 "num_base_bdevs_discovered": 2, 00:19:17.047 "num_base_bdevs_operational": 2, 00:19:17.047 "process": { 00:19:17.047 "type": "rebuild", 00:19:17.047 "target": "spare", 00:19:17.047 "progress": { 00:19:17.047 "blocks": 57344, 00:19:17.047 "percent": 87 00:19:17.047 } 00:19:17.047 }, 00:19:17.047 "base_bdevs_list": [ 00:19:17.047 { 00:19:17.047 "name": "spare", 00:19:17.047 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:17.047 "is_configured": true, 00:19:17.047 "data_offset": 0, 00:19:17.047 "data_size": 65536 00:19:17.047 }, 00:19:17.047 { 00:19:17.047 "name": "BaseBdev2", 00:19:17.047 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:17.047 "is_configured": true, 00:19:17.047 "data_offset": 0, 00:19:17.047 "data_size": 65536 00:19:17.047 } 00:19:17.047 ] 00:19:17.047 }' 00:19:17.047 04:59:40 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:17.047 04:59:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.047 04:59:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:17.306 04:59:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.306 04:59:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:17.566 [2024-11-18 04:59:40.908651] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.566 [2024-11-18 04:59:40.908760] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.566 [2024-11-18 04:59:40.908831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.134 04:59:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:18.393 "name": "raid_bdev1", 00:19:18.393 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:18.393 "strip_size_kb": 0, 00:19:18.393 "state": "online", 00:19:18.393 "raid_level": "raid1", 00:19:18.393 "superblock": false, 00:19:18.393 "num_base_bdevs": 2, 00:19:18.393 "num_base_bdevs_discovered": 2, 00:19:18.393 "num_base_bdevs_operational": 2, 00:19:18.393 "base_bdevs_list": [ 00:19:18.393 { 00:19:18.393 "name": "spare", 00:19:18.393 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:18.393 "is_configured": true, 00:19:18.393 "data_offset": 0, 00:19:18.393 "data_size": 65536 00:19:18.393 }, 00:19:18.393 { 00:19:18.393 "name": "BaseBdev2", 00:19:18.393 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:18.393 "is_configured": true, 00:19:18.393 "data_offset": 0, 00:19:18.393 "data_size": 65536 00:19:18.393 } 00:19:18.393 ] 00:19:18.393 }' 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@660 -- # break 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.393 04:59:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:18.653 "name": "raid_bdev1", 
00:19:18.653 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:18.653 "strip_size_kb": 0, 00:19:18.653 "state": "online", 00:19:18.653 "raid_level": "raid1", 00:19:18.653 "superblock": false, 00:19:18.653 "num_base_bdevs": 2, 00:19:18.653 "num_base_bdevs_discovered": 2, 00:19:18.653 "num_base_bdevs_operational": 2, 00:19:18.653 "base_bdevs_list": [ 00:19:18.653 { 00:19:18.653 "name": "spare", 00:19:18.653 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:18.653 "is_configured": true, 00:19:18.653 "data_offset": 0, 00:19:18.653 "data_size": 65536 00:19:18.653 }, 00:19:18.653 { 00:19:18.653 "name": "BaseBdev2", 00:19:18.653 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:18.653 "is_configured": true, 00:19:18.653 "data_offset": 0, 00:19:18.653 "data_size": 65536 00:19:18.653 } 00:19:18.653 ] 00:19:18.653 }' 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.653 04:59:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.913 04:59:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.913 "name": "raid_bdev1", 00:19:18.913 "uuid": "c481be39-cba4-44c2-a822-c17e3d775f9e", 00:19:18.913 "strip_size_kb": 0, 00:19:18.913 "state": "online", 00:19:18.913 "raid_level": "raid1", 00:19:18.913 "superblock": false, 00:19:18.913 "num_base_bdevs": 2, 00:19:18.913 "num_base_bdevs_discovered": 2, 00:19:18.913 "num_base_bdevs_operational": 2, 00:19:18.913 "base_bdevs_list": [ 00:19:18.913 { 00:19:18.913 "name": "spare", 00:19:18.913 "uuid": "03809ad7-2754-5fe4-9066-f9b928dff5c1", 00:19:18.913 "is_configured": true, 00:19:18.913 "data_offset": 0, 00:19:18.913 "data_size": 65536 00:19:18.913 }, 00:19:18.913 { 00:19:18.913 "name": "BaseBdev2", 00:19:18.913 "uuid": "e0a9252d-e330-45a2-b0a8-22d8027d5c37", 00:19:18.913 "is_configured": true, 00:19:18.913 "data_offset": 0, 00:19:18.913 "data_size": 65536 00:19:18.913 } 00:19:18.913 ] 00:19:18.913 }' 00:19:18.913 04:59:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.913 04:59:42 -- common/autotest_common.sh@10 -- # set +x 00:19:19.481 04:59:42 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:19.481 [2024-11-18 04:59:42.998869] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.481 [2024-11-18 
04:59:42.998941] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.481 [2024-11-18 04:59:42.999106] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.481 [2024-11-18 04:59:42.999251] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.481 [2024-11-18 04:59:42.999288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:19:19.740 04:59:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.740 04:59:43 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:19.999 04:59:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:19.999 04:59:43 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:19.999 04:59:43 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@12 -- # local i 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.999 04:59:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:19.999 /dev/nbd0 00:19:20.259 04:59:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:20.259 04:59:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:20.259 04:59:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:20.259 04:59:43 -- common/autotest_common.sh@867 -- # local i 00:19:20.259 04:59:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:20.259 04:59:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:20.259 04:59:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:20.259 04:59:43 -- common/autotest_common.sh@871 -- # break 00:19:20.259 04:59:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:20.259 04:59:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:20.259 04:59:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.259 1+0 records in 00:19:20.259 1+0 records out 00:19:20.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264189 s, 15.5 MB/s 00:19:20.259 04:59:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.259 04:59:43 -- common/autotest_common.sh@884 -- # size=4096 00:19:20.259 04:59:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.259 04:59:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:20.259 04:59:43 -- common/autotest_common.sh@887 -- # return 0 00:19:20.259 04:59:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.259 04:59:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.259 04:59:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:20.259 
/dev/nbd1 00:19:20.518 04:59:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:20.518 04:59:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:20.518 04:59:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:20.518 04:59:43 -- common/autotest_common.sh@867 -- # local i 00:19:20.518 04:59:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:20.518 04:59:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:20.518 04:59:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:20.518 04:59:43 -- common/autotest_common.sh@871 -- # break 00:19:20.518 04:59:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:20.518 04:59:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:20.518 04:59:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.518 1+0 records in 00:19:20.518 1+0 records out 00:19:20.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305999 s, 13.4 MB/s 00:19:20.518 04:59:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.518 04:59:43 -- common/autotest_common.sh@884 -- # size=4096 00:19:20.518 04:59:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.518 04:59:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:20.518 04:59:43 -- common/autotest_common.sh@887 -- # return 0 00:19:20.518 04:59:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.518 04:59:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.518 04:59:43 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:20.518 04:59:44 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:20.518 04:59:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:20.518 04:59:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:20.518 04:59:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.518 04:59:44 -- bdev/nbd_common.sh@51 -- # local i 00:19:20.518 04:59:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.518 04:59:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@41 -- # break 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:19:21.087 04:59:44 -- bdev/nbd_common.sh@41 -- # break 00:19:21.087 04:59:44 -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.087 04:59:44 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:21.087 04:59:44 -- bdev/bdev_raid.sh@709 -- # killprocess 78200 00:19:21.087 04:59:44 -- common/autotest_common.sh@936 -- # '[' -z 78200 ']' 00:19:21.087 04:59:44 -- common/autotest_common.sh@940 -- # kill -0 78200 00:19:21.087 04:59:44 -- common/autotest_common.sh@941 -- # uname 00:19:21.087 04:59:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:21.087 04:59:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78200 00:19:21.087 04:59:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:21.087 04:59:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:21.087 killing process with pid 78200 00:19:21.087 04:59:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78200' 00:19:21.087 Received shutdown signal, test time was about 60.000000 seconds 00:19:21.087 00:19:21.087 Latency(us) 00:19:21.087 [2024-11-18T04:59:44.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.087 [2024-11-18T04:59:44.611Z] =================================================================================================================== 00:19:21.087 [2024-11-18T04:59:44.611Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.087 04:59:44 -- common/autotest_common.sh@955 -- # kill 78200 00:19:21.087 [2024-11-18 04:59:44.599412] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.087 04:59:44 -- common/autotest_common.sh@960 -- # wait 78200 00:19:21.655 [2024-11-18 04:59:44.870223] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.593 04:59:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:22.593 00:19:22.593 real 0m21.651s 00:19:22.593 user 0m27.321s 00:19:22.593 sys 0m4.297s 00:19:22.593 04:59:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:22.593 04:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:22.593 ************************************ 00:19:22.593 END TEST raid_rebuild_test 00:19:22.593 ************************************ 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:19:22.852 04:59:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:22.852 04:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:22.852 04:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:22.852 ************************************ 00:19:22.852 START TEST raid_rebuild_test_sb 00:19:22.852 ************************************ 00:19:22.852 04:59:46 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:22.852 
04:59:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=78707 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 78707 /var/tmp/spdk-raid.sock 00:19:22.852 04:59:46 -- common/autotest_common.sh@829 -- # '[' -z 78707 ']' 00:19:22.852 04:59:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:22.852 04:59:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:22.852 04:59:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.852 04:59:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:22.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:22.852 04:59:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.852 04:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:22.852 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:22.852 Zero copy mechanism will not be used. 00:19:22.852 [2024-11-18 04:59:46.205741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:22.852 [2024-11-18 04:59:46.205959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78707 ] 00:19:23.111 [2024-11-18 04:59:46.382205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.111 [2024-11-18 04:59:46.590204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.371 [2024-11-18 04:59:46.784821] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.939 04:59:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.939 04:59:47 -- common/autotest_common.sh@862 -- # return 0 00:19:23.939 04:59:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:23.939 04:59:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:23.939 04:59:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:23.939 BaseBdev1_malloc 00:19:24.198 04:59:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:24.198 [2024-11-18 04:59:47.689655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:24.198 [2024-11-18 04:59:47.689743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.198 [2024-11-18 04:59:47.689783] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:19:24.198 [2024-11-18 04:59:47.689816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.198 [2024-11-18 04:59:47.692483] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.198 [2024-11-18 04:59:47.692559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:24.198 BaseBdev1 00:19:24.198 04:59:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:24.198 04:59:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:24.198 04:59:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:24.457 BaseBdev2_malloc 00:19:24.717 04:59:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:24.717 [2024-11-18 04:59:48.205761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:24.717 [2024-11-18 04:59:48.205848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.717 [2024-11-18 04:59:48.205886] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:19:24.717 [2024-11-18 04:59:48.205906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.717 [2024-11-18 04:59:48.208507] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.717 [2024-11-18 04:59:48.208553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:24.717 BaseBdev2 00:19:24.717 04:59:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:24.976 spare_malloc 00:19:24.976 04:59:48 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:25.244 spare_delay 00:19:25.244 04:59:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:25.554 [2024-11-18 04:59:48.916206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.554 [2024-11-18 04:59:48.916309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.554 [2024-11-18 04:59:48.916345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:19:25.554 [2024-11-18 04:59:48.916363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.554 [2024-11-18 04:59:48.919141] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.554 [2024-11-18 04:59:48.919202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.554 spare 00:19:25.554 04:59:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:25.813 [2024-11-18 04:59:49.156345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.813 [2024-11-18 04:59:49.158584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.813 [2024-11-18 04:59:49.158834] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:19:25.813 [2024-11-18 04:59:49.158858] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:25.813 [2024-11-18 04:59:49.159021] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:19:25.813 [2024-11-18 04:59:49.159475] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:19:25.813 [2024-11-18 04:59:49.159504] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:19:25.813 [2024-11-18 04:59:49.159685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.813 04:59:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.071 04:59:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.071 "name": "raid_bdev1", 00:19:26.071 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:26.071 
"strip_size_kb": 0, 00:19:26.071 "state": "online", 00:19:26.071 "raid_level": "raid1", 00:19:26.071 "superblock": true, 00:19:26.071 "num_base_bdevs": 2, 00:19:26.071 "num_base_bdevs_discovered": 2, 00:19:26.071 "num_base_bdevs_operational": 2, 00:19:26.071 "base_bdevs_list": [ 00:19:26.071 { 00:19:26.071 "name": "BaseBdev1", 00:19:26.071 "uuid": "640af7de-202a-5871-9652-0f357b24b87a", 00:19:26.071 "is_configured": true, 00:19:26.071 "data_offset": 2048, 00:19:26.071 "data_size": 63488 00:19:26.071 }, 00:19:26.071 { 00:19:26.071 "name": "BaseBdev2", 00:19:26.071 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:26.071 "is_configured": true, 00:19:26.071 "data_offset": 2048, 00:19:26.071 "data_size": 63488 00:19:26.071 } 00:19:26.071 ] 00:19:26.071 }' 00:19:26.071 04:59:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.071 04:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:26.329 04:59:49 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:26.329 04:59:49 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:26.587 [2024-11-18 04:59:50.049158] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.587 04:59:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:26.587 04:59:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.587 04:59:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:26.845 04:59:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:26.845 04:59:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:26.845 04:59:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:26.846 04:59:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@12 -- # local i 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.846 04:59:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:27.104 [2024-11-18 04:59:50.569124] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:27.104 /dev/nbd0 00:19:27.104 04:59:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:27.104 04:59:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:27.104 04:59:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:27.104 04:59:50 -- common/autotest_common.sh@867 -- # local i 00:19:27.104 04:59:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:27.104 04:59:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:27.104 04:59:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:27.104 04:59:50 -- common/autotest_common.sh@871 -- # break 00:19:27.104 04:59:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:27.104 04:59:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:27.104 04:59:50 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.104 1+0 records in 00:19:27.104 1+0 records out 00:19:27.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273943 s, 15.0 MB/s 00:19:27.104 04:59:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.104 04:59:50 -- common/autotest_common.sh@884 -- # size=4096 00:19:27.104 04:59:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.104 04:59:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:27.104 04:59:50 -- common/autotest_common.sh@887 -- # return 0 00:19:27.104 04:59:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.104 04:59:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:27.104 04:59:50 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:27.104 04:59:50 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:27.104 04:59:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:33.668 63488+0 records in 00:19:33.668 63488+0 records out 00:19:33.669 32505856 bytes (33 MB, 31 MiB) copied, 6.05775 s, 5.4 MB/s 00:19:33.669 04:59:56 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@51 -- # local i 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:33.669 [2024-11-18 04:59:56.928460] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@41 -- # break 00:19:33.669 04:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.669 04:59:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:33.669 [2024-11-18 04:59:57.174229] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.669 04:59:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.669 04:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.928 04:59:57 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.928 "name": "raid_bdev1", 00:19:33.928 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:33.928 "strip_size_kb": 0, 00:19:33.928 "state": "online", 00:19:33.928 "raid_level": "raid1", 00:19:33.928 "superblock": true, 00:19:33.928 "num_base_bdevs": 2, 00:19:33.928 "num_base_bdevs_discovered": 1, 00:19:33.928 "num_base_bdevs_operational": 1, 00:19:33.928 "base_bdevs_list": [ 00:19:33.928 { 00:19:33.928 "name": null, 00:19:33.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.928 "is_configured": false, 00:19:33.928 "data_offset": 2048, 00:19:33.928 "data_size": 63488 00:19:33.928 }, 00:19:33.928 { 00:19:33.928 "name": "BaseBdev2", 00:19:33.928 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:33.928 "is_configured": true, 00:19:33.928 "data_offset": 2048, 00:19:33.928 "data_size": 63488 00:19:33.928 } 00:19:33.928 ] 00:19:33.928 }' 00:19:33.928 04:59:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.928 04:59:57 -- common/autotest_common.sh@10 -- # set +x 00:19:34.187 04:59:57 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.445 [2024-11-18 04:59:57.890534] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:34.445 [2024-11-18 04:59:57.890613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.445 [2024-11-18 04:59:57.903517] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2c10 00:19:34.445 [2024-11-18 04:59:57.905481] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:34.445 04:59:57 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.821 04:59:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.821 04:59:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:35.821 "name": "raid_bdev1", 00:19:35.821 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:35.821 "strip_size_kb": 0, 00:19:35.821 "state": "online", 00:19:35.821 "raid_level": "raid1", 00:19:35.821 "superblock": true, 00:19:35.821 "num_base_bdevs": 2, 00:19:35.821 "num_base_bdevs_discovered": 2, 00:19:35.821 "num_base_bdevs_operational": 2, 00:19:35.821 "process": { 00:19:35.821 "type": "rebuild", 00:19:35.821 "target": "spare", 00:19:35.821 "progress": { 00:19:35.821 "blocks": 24576, 00:19:35.821 "percent": 38 00:19:35.821 } 00:19:35.821 }, 00:19:35.821 "base_bdevs_list": [ 00:19:35.821 { 00:19:35.821 "name": "spare", 00:19:35.821 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:35.821 "is_configured": true, 00:19:35.821 
"data_offset": 2048, 00:19:35.821 "data_size": 63488 00:19:35.821 }, 00:19:35.821 { 00:19:35.821 "name": "BaseBdev2", 00:19:35.821 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:35.821 "is_configured": true, 00:19:35.821 "data_offset": 2048, 00:19:35.821 "data_size": 63488 00:19:35.821 } 00:19:35.821 ] 00:19:35.821 }' 00:19:35.821 04:59:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:35.821 04:59:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.821 04:59:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:35.821 04:59:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.821 04:59:59 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:36.079 [2024-11-18 04:59:59.399553] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.079 [2024-11-18 04:59:59.412223] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:36.079 [2024-11-18 04:59:59.412316] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.079 04:59:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.339 04:59:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.339 "name": "raid_bdev1", 00:19:36.339 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:36.339 "strip_size_kb": 0, 00:19:36.339 "state": "online", 00:19:36.339 "raid_level": "raid1", 00:19:36.339 "superblock": true, 00:19:36.339 "num_base_bdevs": 2, 00:19:36.339 "num_base_bdevs_discovered": 1, 00:19:36.339 "num_base_bdevs_operational": 1, 00:19:36.339 "base_bdevs_list": [ 00:19:36.339 { 00:19:36.339 "name": null, 00:19:36.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.339 "is_configured": false, 00:19:36.339 "data_offset": 2048, 00:19:36.339 "data_size": 63488 00:19:36.339 }, 00:19:36.339 { 00:19:36.339 "name": "BaseBdev2", 00:19:36.339 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:36.339 "is_configured": true, 00:19:36.339 "data_offset": 2048, 00:19:36.339 "data_size": 63488 00:19:36.339 } 00:19:36.339 ] 00:19:36.339 }' 00:19:36.339 04:59:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.339 04:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.598 04:59:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.857 05:00:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:36.857 "name": "raid_bdev1", 00:19:36.857 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:36.857 "strip_size_kb": 0, 00:19:36.857 "state": "online", 00:19:36.857 "raid_level": "raid1", 00:19:36.857 "superblock": true, 00:19:36.857 "num_base_bdevs": 2, 00:19:36.857 "num_base_bdevs_discovered": 1, 00:19:36.857 "num_base_bdevs_operational": 1, 00:19:36.857 "base_bdevs_list": [ 00:19:36.857 { 00:19:36.857 "name": null, 00:19:36.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.857 "is_configured": false, 00:19:36.857 "data_offset": 2048, 00:19:36.857 "data_size": 63488 00:19:36.857 }, 00:19:36.857 { 00:19:36.857 "name": "BaseBdev2", 00:19:36.857 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:36.857 "is_configured": true, 00:19:36.857 "data_offset": 2048, 00:19:36.857 "data_size": 63488 00:19:36.857 } 00:19:36.857 ] 00:19:36.857 }' 00:19:36.857 05:00:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:36.857 05:00:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:36.857 05:00:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:36.857 05:00:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:36.857 05:00:00 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:37.117 [2024-11-18 05:00:00.420208] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:37.117 [2024-11-18 05:00:00.420267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.117 [2024-11-18 05:00:00.433068] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2ce0 00:19:37.117 [2024-11-18 05:00:00.435226] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.117 05:00:00 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.053 05:00:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.312 05:00:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:38.313 "name": "raid_bdev1", 00:19:38.313 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:38.313 "strip_size_kb": 0, 00:19:38.313 "state": "online", 00:19:38.313 "raid_level": "raid1", 00:19:38.313 "superblock": true, 00:19:38.313 "num_base_bdevs": 2, 00:19:38.313 "num_base_bdevs_discovered": 2, 00:19:38.313 "num_base_bdevs_operational": 2, 00:19:38.313 "process": { 00:19:38.313 "type": "rebuild", 00:19:38.313 "target": "spare", 
00:19:38.313 "progress": { 00:19:38.313 "blocks": 24576, 00:19:38.313 "percent": 38 00:19:38.313 } 00:19:38.313 }, 00:19:38.313 "base_bdevs_list": [ 00:19:38.313 { 00:19:38.313 "name": "spare", 00:19:38.313 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:38.313 "is_configured": true, 00:19:38.313 "data_offset": 2048, 00:19:38.313 "data_size": 63488 00:19:38.313 }, 00:19:38.313 { 00:19:38.313 "name": "BaseBdev2", 00:19:38.313 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:38.313 "is_configured": true, 00:19:38.313 "data_offset": 2048, 00:19:38.313 "data_size": 63488 00:19:38.313 } 00:19:38.313 ] 00:19:38.313 }' 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:38.313 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@657 -- # local timeout=375 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.313 05:00:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.571 05:00:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:38.571 "name": "raid_bdev1", 00:19:38.571 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:38.571 "strip_size_kb": 0, 00:19:38.571 "state": "online", 00:19:38.571 "raid_level": "raid1", 00:19:38.571 "superblock": true, 00:19:38.571 "num_base_bdevs": 2, 00:19:38.571 "num_base_bdevs_discovered": 2, 00:19:38.571 "num_base_bdevs_operational": 2, 00:19:38.571 "process": { 00:19:38.571 "type": "rebuild", 00:19:38.571 "target": "spare", 00:19:38.571 "progress": { 00:19:38.571 "blocks": 30720, 00:19:38.571 "percent": 48 00:19:38.571 } 00:19:38.571 }, 00:19:38.571 "base_bdevs_list": [ 00:19:38.571 { 00:19:38.571 "name": "spare", 00:19:38.571 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:38.571 "is_configured": true, 00:19:38.571 "data_offset": 2048, 00:19:38.571 "data_size": 63488 00:19:38.571 }, 00:19:38.571 { 00:19:38.571 "name": "BaseBdev2", 00:19:38.571 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:38.571 "is_configured": true, 00:19:38.571 "data_offset": 2048, 00:19:38.571 "data_size": 63488 00:19:38.571 } 00:19:38.571 ] 00:19:38.571 }' 00:19:38.571 05:00:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:38.571 05:00:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:38.571 05:00:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:38.571 05:00:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.571 05:00:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:39.507 05:00:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:39.507 05:00:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.507 05:00:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:39.507 05:00:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:39.507 05:00:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:39.507 05:00:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:39.507 05:00:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.507 05:00:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.767 05:00:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:39.767 "name": "raid_bdev1", 00:19:39.767 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:39.767 "strip_size_kb": 0, 00:19:39.767 "state": "online", 00:19:39.767 "raid_level": "raid1", 00:19:39.767 "superblock": true, 00:19:39.767 "num_base_bdevs": 2, 00:19:39.767 "num_base_bdevs_discovered": 2, 00:19:39.767 "num_base_bdevs_operational": 2, 00:19:39.767 "process": { 00:19:39.767 "type": "rebuild", 00:19:39.767 "target": "spare", 00:19:39.767 "progress": { 00:19:39.767 "blocks": 55296, 00:19:39.767 "percent": 87 00:19:39.767 } 00:19:39.767 }, 00:19:39.767 "base_bdevs_list": [ 00:19:39.767 { 00:19:39.767 "name": "spare", 00:19:39.767 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:39.767 "is_configured": true, 00:19:39.767 "data_offset": 2048, 00:19:39.767 "data_size": 63488 00:19:39.767 }, 00:19:39.767 { 00:19:39.767 "name": "BaseBdev2", 00:19:39.767 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:39.767 "is_configured": true, 00:19:39.767 "data_offset": 2048, 00:19:39.767 "data_size": 63488 00:19:39.767 } 00:19:39.767 ] 00:19:39.767 }' 00:19:39.767 05:00:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:39.767 05:00:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.767 05:00:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:39.767 05:00:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.767 05:00:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:40.334 [2024-11-18 05:00:03.549891] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:40.334 [2024-11-18 05:00:03.550006] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:40.334 [2024-11-18 05:00:03.550162] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.901 05:00:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.902 05:00:04 -- bdev/bdev_raid.sh@188 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:41.164 "name": "raid_bdev1", 00:19:41.164 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:41.164 "strip_size_kb": 0, 00:19:41.164 "state": "online", 00:19:41.164 "raid_level": "raid1", 00:19:41.164 "superblock": true, 00:19:41.164 "num_base_bdevs": 2, 00:19:41.164 "num_base_bdevs_discovered": 2, 00:19:41.164 "num_base_bdevs_operational": 2, 00:19:41.164 "base_bdevs_list": [ 00:19:41.164 { 00:19:41.164 "name": "spare", 00:19:41.164 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:41.164 "is_configured": true, 00:19:41.164 "data_offset": 2048, 00:19:41.164 "data_size": 63488 00:19:41.164 }, 00:19:41.164 { 00:19:41.164 "name": "BaseBdev2", 00:19:41.164 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:41.164 "is_configured": true, 00:19:41.164 "data_offset": 2048, 00:19:41.164 "data_size": 63488 00:19:41.164 } 00:19:41.164 ] 00:19:41.164 }' 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@660 -- # break 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.164 05:00:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:41.423 "name": "raid_bdev1", 00:19:41.423 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:41.423 "strip_size_kb": 0, 00:19:41.423 "state": "online", 00:19:41.423 "raid_level": "raid1", 00:19:41.423 "superblock": true, 00:19:41.423 "num_base_bdevs": 2, 00:19:41.423 "num_base_bdevs_discovered": 2, 00:19:41.423 "num_base_bdevs_operational": 2, 00:19:41.423 "base_bdevs_list": [ 00:19:41.423 { 00:19:41.423 "name": "spare", 00:19:41.423 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 }, 00:19:41.423 { 00:19:41.423 "name": "BaseBdev2", 00:19:41.423 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 } 00:19:41.423 ] 00:19:41.423 }' 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.423 "name": "raid_bdev1", 00:19:41.423 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:41.423 "strip_size_kb": 0, 00:19:41.423 "state": "online", 00:19:41.423 "raid_level": "raid1", 00:19:41.423 "superblock": true, 00:19:41.423 "num_base_bdevs": 2, 00:19:41.423 "num_base_bdevs_discovered": 2, 00:19:41.423 "num_base_bdevs_operational": 2, 00:19:41.423 "base_bdevs_list": [ 00:19:41.423 { 00:19:41.423 "name": "spare", 00:19:41.423 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 }, 00:19:41.423 { 00:19:41.423 "name": "BaseBdev2", 00:19:41.423 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 } 00:19:41.423 ] 00:19:41.423 }' 00:19:41.423 05:00:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.682 05:00:04 -- common/autotest_common.sh@10 -- # set +x 00:19:41.941 05:00:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:41.941 [2024-11-18 05:00:05.456485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.941 [2024-11-18 05:00:05.456549] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.941 [2024-11-18 05:00:05.456633] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.941 [2024-11-18 05:00:05.456781] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.941 [2024-11-18 05:00:05.456801] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:19:42.200 05:00:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.200 05:00:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:42.200 05:00:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:42.200 05:00:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:42.200 05:00:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:19:42.200 05:00:05 -- bdev/nbd_common.sh@12 -- # local i 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.200 05:00:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:42.459 /dev/nbd0 00:19:42.459 05:00:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.459 05:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.459 05:00:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:42.459 05:00:05 -- common/autotest_common.sh@867 -- # local i 00:19:42.459 05:00:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:42.459 05:00:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:42.459 05:00:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:42.459 05:00:05 -- common/autotest_common.sh@871 -- # break 00:19:42.459 05:00:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:42.459 05:00:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:42.459 05:00:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.459 1+0 records in 00:19:42.459 1+0 records out 00:19:42.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284624 s, 14.4 MB/s 00:19:42.459 05:00:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.459 05:00:05 -- common/autotest_common.sh@884 -- # size=4096 00:19:42.459 05:00:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.459 05:00:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:42.459 05:00:05 -- common/autotest_common.sh@887 -- # return 0 00:19:42.459 05:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.459 05:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.459 05:00:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:42.718 /dev/nbd1 00:19:42.718 05:00:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:42.718 05:00:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:42.718 05:00:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:42.718 05:00:06 -- common/autotest_common.sh@867 -- # local i 00:19:42.718 05:00:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:42.718 05:00:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:42.718 05:00:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:42.718 05:00:06 -- common/autotest_common.sh@871 -- # break 00:19:42.718 05:00:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:42.718 05:00:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:42.718 05:00:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.718 1+0 records in 00:19:42.718 1+0 records out 00:19:42.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258637 s, 15.8 MB/s 00:19:42.718 05:00:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.718 05:00:06 -- common/autotest_common.sh@884 -- # size=4096 00:19:42.718 05:00:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.718 05:00:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:19:42.718 05:00:06 -- common/autotest_common.sh@887 -- # return 0 00:19:42.718 05:00:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.718 05:00:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.718 05:00:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:42.977 05:00:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:42.978 05:00:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:42.978 05:00:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:42.978 05:00:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.978 05:00:06 -- bdev/nbd_common.sh@51 -- # local i 00:19:42.978 05:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.978 05:00:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@41 -- # break 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.236 05:00:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:43.495 05:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@41 -- # break 00:19:43.496 05:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.496 05:00:06 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:43.496 05:00:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:43.496 05:00:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:43.496 05:00:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:43.754 05:00:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.754 [2024-11-18 05:00:07.260827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:43.754 [2024-11-18 05:00:07.260945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.754 [2024-11-18 05:00:07.260980] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:19:43.754 [2024-11-18 05:00:07.260994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.754 [2024-11-18 05:00:07.263543] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.754 [2024-11-18 05:00:07.263596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:19:43.754 [2024-11-18 05:00:07.263712] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:43.754 [2024-11-18 05:00:07.263784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.754 BaseBdev1 00:19:44.013 05:00:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:44.013 05:00:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:44.013 05:00:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:44.013 05:00:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:44.272 [2024-11-18 05:00:07.704961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:44.272 [2024-11-18 05:00:07.705056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.272 [2024-11-18 05:00:07.705090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:19:44.272 [2024-11-18 05:00:07.705105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.272 [2024-11-18 05:00:07.705652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.272 [2024-11-18 05:00:07.705686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:44.272 [2024-11-18 05:00:07.705807] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:44.273 [2024-11-18 05:00:07.705822] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:44.273 [2024-11-18 05:00:07.705850] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.273 [2024-11-18 05:00:07.705873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state configuring 00:19:44.273 [2024-11-18 05:00:07.705941] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.273 BaseBdev2 00:19:44.273 05:00:07 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:44.531 05:00:07 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:44.790 [2024-11-18 05:00:08.141103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:44.790 [2024-11-18 05:00:08.141227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.790 [2024-11-18 05:00:08.141260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:19:44.790 [2024-11-18 05:00:08.141277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.790 [2024-11-18 05:00:08.141848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.790 [2024-11-18 05:00:08.141902] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:44.790 [2024-11-18 05:00:08.142032] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:44.790 [2024-11-18 05:00:08.142086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:19:44.790 spare 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.790 05:00:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.791 [2024-11-18 05:00:08.242205] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:19:44.791 [2024-11-18 05:00:08.242256] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:44.791 [2024-11-18 05:00:08.242426] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1390 00:19:44.791 [2024-11-18 05:00:08.242902] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:19:44.791 [2024-11-18 05:00:08.242928] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:19:44.791 [2024-11-18 05:00:08.243106] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.049 05:00:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.049 "name": "raid_bdev1", 00:19:45.049 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:45.049 "strip_size_kb": 0, 00:19:45.049 "state": "online", 00:19:45.049 "raid_level": "raid1", 00:19:45.049 "superblock": true, 00:19:45.049 "num_base_bdevs": 2, 00:19:45.049 "num_base_bdevs_discovered": 2, 00:19:45.049 "num_base_bdevs_operational": 2, 00:19:45.049 "base_bdevs_list": [ 00:19:45.049 { 00:19:45.049 "name": "spare", 00:19:45.049 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:45.049 "is_configured": true, 00:19:45.049 "data_offset": 2048, 00:19:45.049 "data_size": 63488 00:19:45.049 }, 00:19:45.049 { 00:19:45.049 "name": "BaseBdev2", 00:19:45.049 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:45.049 "is_configured": true, 00:19:45.049 "data_offset": 2048, 00:19:45.049 "data_size": 63488 00:19:45.049 } 00:19:45.049 ] 00:19:45.049 }' 00:19:45.049 05:00:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.049 05:00:08 -- common/autotest_common.sh@10 -- # set +x 00:19:45.308 05:00:08 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:45.308 05:00:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:45.308 05:00:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:45.308 05:00:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:45.308 05:00:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:45.309 05:00:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.309 05:00:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:45.568 "name": "raid_bdev1", 00:19:45.568 "uuid": "b20d9b1d-35c5-4d07-ab5a-f9610e8477a8", 00:19:45.568 "strip_size_kb": 0, 00:19:45.568 "state": "online", 00:19:45.568 "raid_level": "raid1", 00:19:45.568 "superblock": true, 00:19:45.568 "num_base_bdevs": 2, 00:19:45.568 "num_base_bdevs_discovered": 2, 00:19:45.568 "num_base_bdevs_operational": 2, 00:19:45.568 "base_bdevs_list": [ 00:19:45.568 { 00:19:45.568 "name": "spare", 00:19:45.568 "uuid": "ed83f1f7-2347-5f0e-a81e-a281bafce2ff", 00:19:45.568 "is_configured": true, 00:19:45.568 "data_offset": 2048, 00:19:45.568 "data_size": 63488 00:19:45.568 }, 00:19:45.568 { 00:19:45.568 "name": "BaseBdev2", 00:19:45.568 "uuid": "af92ad6f-9309-5d8e-97ca-e339f1f90544", 00:19:45.568 "is_configured": true, 00:19:45.568 "data_offset": 2048, 00:19:45.568 "data_size": 63488 00:19:45.568 } 00:19:45.568 ] 00:19:45.568 }' 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.568 05:00:08 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:45.832 05:00:09 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.832 05:00:09 -- bdev/bdev_raid.sh@709 -- # killprocess 78707 00:19:45.832 05:00:09 -- common/autotest_common.sh@936 -- # '[' -z 78707 ']' 00:19:45.832 05:00:09 -- common/autotest_common.sh@940 -- # kill -0 78707 00:19:45.832 05:00:09 -- common/autotest_common.sh@941 -- # uname 00:19:45.832 05:00:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:45.832 05:00:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78707 00:19:45.832 05:00:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:45.832 05:00:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:45.832 killing process with pid 78707 00:19:45.832 05:00:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78707' 00:19:45.832 05:00:09 -- common/autotest_common.sh@955 -- # kill 78707 00:19:45.832 Received shutdown signal, test time was about 60.000000 seconds 00:19:45.832 00:19:45.832 Latency(us) 00:19:45.832 [2024-11-18T05:00:09.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.832 [2024-11-18T05:00:09.356Z] =================================================================================================================== 00:19:45.832 [2024-11-18T05:00:09.356Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:45.832 [2024-11-18 05:00:09.131855] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.832 05:00:09 -- common/autotest_common.sh@960 -- # wait 78707 00:19:45.832 [2024-11-18 05:00:09.131943] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.832 [2024-11-18 05:00:09.132011] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.832 [2024-11-18 05:00:09.132031] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:19:45.832 [2024-11-18 
05:00:09.325675] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:47.213 00:19:47.213 real 0m24.167s 00:19:47.213 user 0m32.186s 00:19:47.213 sys 0m4.562s 00:19:47.213 05:00:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:47.213 05:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:47.213 ************************************ 00:19:47.213 END TEST raid_rebuild_test_sb 00:19:47.213 ************************************ 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:19:47.213 05:00:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:47.213 05:00:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.213 05:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:47.213 ************************************ 00:19:47.213 START TEST raid_rebuild_test_io 00:19:47.213 ************************************ 00:19:47.213 05:00:10 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=79290 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79290 /var/tmp/spdk-raid.sock 00:19:47.213 05:00:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:47.213 05:00:10 -- common/autotest_common.sh@829 -- # '[' -z 79290 ']' 00:19:47.213 05:00:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:47.213 05:00:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
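The verify_raid_bdev_state checks traced throughout this log reduce to a single RPC call filtered through jq. A minimal standalone sketch of the same query, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock (the variable names here are illustrative, not from the test suite):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the descriptor for one raid bdev and assert on the fields the test checks.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == online ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 2 ]]

The same pattern with the '.process.type // "none"' and '.process.target // "none"' filters seen above drives verify_raid_bdev_process.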
00:19:47.213 05:00:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:47.213 05:00:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.213 05:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:47.213 [2024-11-18 05:00:10.423463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:47.213 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:47.213 Zero copy mechanism will not be used. 00:19:47.213 [2024-11-18 05:00:10.423645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79290 ] 00:19:47.213 [2024-11-18 05:00:10.591994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.472 [2024-11-18 05:00:10.754189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.472 [2024-11-18 05:00:10.901928] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.049 05:00:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.049 05:00:11 -- common/autotest_common.sh@862 -- # return 0 00:19:48.049 05:00:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:48.049 05:00:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:48.049 05:00:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:48.330 BaseBdev1 00:19:48.330 05:00:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:48.330 05:00:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:48.330 05:00:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:48.330 BaseBdev2 00:19:48.330 05:00:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:48.601 spare_malloc 00:19:48.601 05:00:12 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:48.860 spare_delay 00:19:48.860 05:00:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:49.120 [2024-11-18 05:00:12.482954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.120 [2024-11-18 05:00:12.483076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.120 [2024-11-18 05:00:12.483109] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:49.120 [2024-11-18 05:00:12.483125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.120 [2024-11-18 05:00:12.485378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.120 [2024-11-18 05:00:12.485438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.120 spare 00:19:49.120 05:00:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:49.379 [2024-11-18 05:00:12.683013] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.379 [2024-11-18 05:00:12.685111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.379 [2024-11-18 05:00:12.685228] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:49.379 [2024-11-18 05:00:12.685249] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:49.379 [2024-11-18 05:00:12.685383] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:19:49.379 [2024-11-18 05:00:12.685812] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:49.379 [2024-11-18 05:00:12.685839] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:49.379 [2024-11-18 05:00:12.686018] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.379 05:00:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.638 05:00:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.638 "name": "raid_bdev1", 00:19:49.638 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:49.638 "strip_size_kb": 0, 00:19:49.638 "state": "online", 00:19:49.638 "raid_level": "raid1", 00:19:49.638 "superblock": false, 00:19:49.638 "num_base_bdevs": 2, 00:19:49.638 "num_base_bdevs_discovered": 2, 00:19:49.638 "num_base_bdevs_operational": 2, 00:19:49.638 "base_bdevs_list": [ 00:19:49.638 { 00:19:49.638 "name": "BaseBdev1", 00:19:49.638 "uuid": "bd5ce044-a2ed-49b3-b20e-b8bea944107e", 00:19:49.638 "is_configured": true, 00:19:49.638 "data_offset": 0, 00:19:49.638 "data_size": 65536 00:19:49.638 }, 00:19:49.638 { 00:19:49.638 "name": "BaseBdev2", 00:19:49.638 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:49.638 "is_configured": true, 00:19:49.638 "data_offset": 0, 00:19:49.638 "data_size": 65536 00:19:49.638 } 00:19:49.638 ] 00:19:49.638 }' 00:19:49.638 05:00:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.638 05:00:12 -- common/autotest_common.sh@10 -- # set +x 00:19:49.897 05:00:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:49.897 05:00:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:50.157 [2024-11-18 05:00:13.447421] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.157 05:00:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:50.157 05:00:13 -- 
bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.157 05:00:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:50.157 05:00:13 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:50.157 05:00:13 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:50.157 05:00:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:50.157 05:00:13 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:50.417 [2024-11-18 05:00:13.758609] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:19:50.417 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:50.417 Zero copy mechanism will not be used. 00:19:50.417 Running I/O for 60 seconds... 00:19:50.417 [2024-11-18 05:00:13.847924] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:50.417 [2024-11-18 05:00:13.860927] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.417 05:00:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.677 05:00:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.677 "name": "raid_bdev1", 00:19:50.677 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:50.677 "strip_size_kb": 0, 00:19:50.677 "state": "online", 00:19:50.677 "raid_level": "raid1", 00:19:50.677 "superblock": false, 00:19:50.677 "num_base_bdevs": 2, 00:19:50.677 "num_base_bdevs_discovered": 1, 00:19:50.677 "num_base_bdevs_operational": 1, 00:19:50.677 "base_bdevs_list": [ 00:19:50.677 { 00:19:50.677 "name": null, 00:19:50.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.677 "is_configured": false, 00:19:50.677 "data_offset": 0, 00:19:50.677 "data_size": 65536 00:19:50.677 }, 00:19:50.677 { 00:19:50.677 "name": "BaseBdev2", 00:19:50.677 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:50.677 "is_configured": true, 00:19:50.677 "data_offset": 0, 00:19:50.677 "data_size": 65536 00:19:50.677 } 00:19:50.677 ] 00:19:50.677 }' 00:19:50.677 05:00:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.677 05:00:14 -- common/autotest_common.sh@10 -- # set +x 00:19:50.936 05:00:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.195 [2024-11-18 05:00:14.589424] 
bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:51.195 [2024-11-18 05:00:14.589488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.195 [2024-11-18 05:00:14.635190] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:51.195 05:00:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:51.195 [2024-11-18 05:00:14.637505] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.455 [2024-11-18 05:00:14.746585] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:51.455 [2024-11-18 05:00:14.747001] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:51.455 [2024-11-18 05:00:14.962613] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:51.455 [2024-11-18 05:00:14.962896] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:52.023 [2024-11-18 05:00:15.285404] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:52.023 [2024-11-18 05:00:15.285904] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:52.023 [2024-11-18 05:00:15.495344] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.282 05:00:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.282 [2024-11-18 05:00:15.718279] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:52.542 [2024-11-18 05:00:15.826534] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:52.542 05:00:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.542 "name": "raid_bdev1", 00:19:52.542 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:52.542 "strip_size_kb": 0, 00:19:52.542 "state": "online", 00:19:52.542 "raid_level": "raid1", 00:19:52.542 "superblock": false, 00:19:52.542 "num_base_bdevs": 2, 00:19:52.542 "num_base_bdevs_discovered": 2, 00:19:52.542 "num_base_bdevs_operational": 2, 00:19:52.542 "process": { 00:19:52.542 "type": "rebuild", 00:19:52.542 "target": "spare", 00:19:52.542 "progress": { 00:19:52.542 "blocks": 16384, 00:19:52.542 "percent": 25 00:19:52.542 } 00:19:52.542 }, 00:19:52.542 "base_bdevs_list": [ 00:19:52.542 { 00:19:52.542 "name": "spare", 00:19:52.542 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:52.542 "is_configured": true, 00:19:52.542 "data_offset": 0, 00:19:52.542 "data_size": 65536 00:19:52.542 }, 00:19:52.542 { 00:19:52.542 "name": 
"BaseBdev2", 00:19:52.542 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:52.542 "is_configured": true, 00:19:52.542 "data_offset": 0, 00:19:52.542 "data_size": 65536 00:19:52.542 } 00:19:52.542 ] 00:19:52.542 }' 00:19:52.542 05:00:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.542 05:00:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.542 05:00:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.542 05:00:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.542 05:00:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:52.802 [2024-11-18 05:00:16.121276] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.802 [2024-11-18 05:00:16.153291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:52.802 [2024-11-18 05:00:16.254059] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.802 [2024-11-18 05:00:16.255825] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.802 [2024-11-18 05:00:16.292778] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.062 05:00:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.322 05:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.322 "name": "raid_bdev1", 00:19:53.322 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:53.322 "strip_size_kb": 0, 00:19:53.322 "state": "online", 00:19:53.322 "raid_level": "raid1", 00:19:53.322 "superblock": false, 00:19:53.322 "num_base_bdevs": 2, 00:19:53.322 "num_base_bdevs_discovered": 1, 00:19:53.322 "num_base_bdevs_operational": 1, 00:19:53.322 "base_bdevs_list": [ 00:19:53.322 { 00:19:53.322 "name": null, 00:19:53.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.322 "is_configured": false, 00:19:53.322 "data_offset": 0, 00:19:53.322 "data_size": 65536 00:19:53.322 }, 00:19:53.322 { 00:19:53.322 "name": "BaseBdev2", 00:19:53.322 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:53.322 "is_configured": true, 00:19:53.322 "data_offset": 0, 00:19:53.322 "data_size": 65536 00:19:53.322 } 00:19:53.322 ] 00:19:53.322 }' 00:19:53.322 05:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.322 05:00:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@610 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.581 05:00:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.840 05:00:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:53.840 "name": "raid_bdev1", 00:19:53.840 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:53.840 "strip_size_kb": 0, 00:19:53.840 "state": "online", 00:19:53.840 "raid_level": "raid1", 00:19:53.840 "superblock": false, 00:19:53.840 "num_base_bdevs": 2, 00:19:53.840 "num_base_bdevs_discovered": 1, 00:19:53.840 "num_base_bdevs_operational": 1, 00:19:53.840 "base_bdevs_list": [ 00:19:53.840 { 00:19:53.840 "name": null, 00:19:53.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.840 "is_configured": false, 00:19:53.840 "data_offset": 0, 00:19:53.840 "data_size": 65536 00:19:53.840 }, 00:19:53.840 { 00:19:53.840 "name": "BaseBdev2", 00:19:53.840 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:53.840 "is_configured": true, 00:19:53.840 "data_offset": 0, 00:19:53.840 "data_size": 65536 00:19:53.840 } 00:19:53.840 ] 00:19:53.840 }' 00:19:53.840 05:00:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:53.840 05:00:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:53.840 05:00:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.840 05:00:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:53.840 05:00:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:54.100 [2024-11-18 05:00:17.372968] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:54.100 [2024-11-18 05:00:17.373013] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.100 [2024-11-18 05:00:17.424963] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:54.100 05:00:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:54.100 [2024-11-18 05:00:17.427372] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.100 [2024-11-18 05:00:17.541482] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:54.100 [2024-11-18 05:00:17.541985] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:54.358 [2024-11-18 05:00:17.759901] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:54.358 [2024-11-18 05:00:17.760438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:54.617 [2024-11-18 05:00:18.083892] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:54.875 [2024-11-18 05:00:18.291741] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:54.875 
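The split/process_offset DEBUG lines above come from the rebuild process walking the raid bdev while host I/O from bdevperf is split around the active window. Watching its progress from outside is just the same get_bdevs query again; a minimal polling sketch, reusing the illustrative rpc/sock variables from the earlier snippet (the loop itself is not part of bdev_raid.sh):

  while :; do
      pct=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
      [[ $pct == done ]] && break
      echo "rebuild: ${pct}%"
      sleep 1
  done

Once the process object disappears from the descriptor the rebuild is over, which is what the 'Finished rebuild on raid bdev raid_bdev1' NOTICE later in this run reports.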
[2024-11-18 05:00:18.291956] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.134 05:00:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.134 [2024-11-18 05:00:18.631767] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.393 "name": "raid_bdev1", 00:19:55.393 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:55.393 "strip_size_kb": 0, 00:19:55.393 "state": "online", 00:19:55.393 "raid_level": "raid1", 00:19:55.393 "superblock": false, 00:19:55.393 "num_base_bdevs": 2, 00:19:55.393 "num_base_bdevs_discovered": 2, 00:19:55.393 "num_base_bdevs_operational": 2, 00:19:55.393 "process": { 00:19:55.393 "type": "rebuild", 00:19:55.393 "target": "spare", 00:19:55.393 "progress": { 00:19:55.393 "blocks": 14336, 00:19:55.393 "percent": 21 00:19:55.393 } 00:19:55.393 }, 00:19:55.393 "base_bdevs_list": [ 00:19:55.393 { 00:19:55.393 "name": "spare", 00:19:55.393 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:55.393 "is_configured": true, 00:19:55.393 "data_offset": 0, 00:19:55.393 "data_size": 65536 00:19:55.393 }, 00:19:55.393 { 00:19:55.393 "name": "BaseBdev2", 00:19:55.393 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:55.393 "is_configured": true, 00:19:55.393 "data_offset": 0, 00:19:55.393 "data_size": 65536 00:19:55.393 } 00:19:55.393 ] 00:19:55.393 }' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@657 -- # local timeout=392 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.393 
[2024-11-18 05:00:18.854825] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:55.393 [2024-11-18 05:00:18.855274] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.393 "name": "raid_bdev1", 00:19:55.393 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:55.393 "strip_size_kb": 0, 00:19:55.393 "state": "online", 00:19:55.393 "raid_level": "raid1", 00:19:55.393 "superblock": false, 00:19:55.393 "num_base_bdevs": 2, 00:19:55.393 "num_base_bdevs_discovered": 2, 00:19:55.393 "num_base_bdevs_operational": 2, 00:19:55.393 "process": { 00:19:55.393 "type": "rebuild", 00:19:55.393 "target": "spare", 00:19:55.393 "progress": { 00:19:55.393 "blocks": 16384, 00:19:55.393 "percent": 25 00:19:55.393 } 00:19:55.393 }, 00:19:55.393 "base_bdevs_list": [ 00:19:55.393 { 00:19:55.393 "name": "spare", 00:19:55.393 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:55.393 "is_configured": true, 00:19:55.393 "data_offset": 0, 00:19:55.393 "data_size": 65536 00:19:55.393 }, 00:19:55.393 { 00:19:55.393 "name": "BaseBdev2", 00:19:55.393 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:55.393 "is_configured": true, 00:19:55.393 "data_offset": 0, 00:19:55.393 "data_size": 65536 00:19:55.393 } 00:19:55.393 ] 00:19:55.393 }' 00:19:55.393 05:00:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.652 05:00:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.652 05:00:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.652 05:00:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.652 05:00:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:56.218 [2024-11-18 05:00:19.601093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.477 05:00:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.736 05:00:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.736 "name": "raid_bdev1", 00:19:56.736 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:56.736 "strip_size_kb": 0, 00:19:56.736 "state": "online", 00:19:56.736 "raid_level": "raid1", 00:19:56.736 "superblock": false, 00:19:56.736 "num_base_bdevs": 2, 00:19:56.736 "num_base_bdevs_discovered": 2, 00:19:56.736 "num_base_bdevs_operational": 2, 00:19:56.736 "process": { 00:19:56.736 "type": "rebuild", 00:19:56.736 "target": "spare", 00:19:56.736 "progress": { 00:19:56.736 "blocks": 36864, 00:19:56.736 "percent": 56 00:19:56.736 } 00:19:56.736 }, 00:19:56.736 "base_bdevs_list": [ 00:19:56.736 { 00:19:56.736 "name": "spare", 00:19:56.736 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:56.736 "is_configured": true, 
00:19:56.736 "data_offset": 0, 00:19:56.736 "data_size": 65536 00:19:56.736 }, 00:19:56.736 { 00:19:56.736 "name": "BaseBdev2", 00:19:56.736 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:56.736 "is_configured": true, 00:19:56.736 "data_offset": 0, 00:19:56.736 "data_size": 65536 00:19:56.736 } 00:19:56.736 ] 00:19:56.736 }' 00:19:56.736 05:00:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.736 05:00:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.736 05:00:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.736 05:00:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.736 05:00:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:57.303 [2024-11-18 05:00:20.540855] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:57.303 [2024-11-18 05:00:20.541300] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:57.303 [2024-11-18 05:00:20.692934] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:57.871 [2024-11-18 05:00:21.131697] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:57.871 05:00:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:57.871 05:00:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.871 05:00:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:57.871 05:00:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:57.871 05:00:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:57.872 05:00:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:57.872 05:00:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.872 05:00:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.131 05:00:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:58.131 "name": "raid_bdev1", 00:19:58.131 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:58.131 "strip_size_kb": 0, 00:19:58.131 "state": "online", 00:19:58.131 "raid_level": "raid1", 00:19:58.131 "superblock": false, 00:19:58.131 "num_base_bdevs": 2, 00:19:58.131 "num_base_bdevs_discovered": 2, 00:19:58.131 "num_base_bdevs_operational": 2, 00:19:58.131 "process": { 00:19:58.131 "type": "rebuild", 00:19:58.131 "target": "spare", 00:19:58.131 "progress": { 00:19:58.131 "blocks": 57344, 00:19:58.131 "percent": 87 00:19:58.131 } 00:19:58.131 }, 00:19:58.131 "base_bdevs_list": [ 00:19:58.131 { 00:19:58.131 "name": "spare", 00:19:58.131 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:58.131 "is_configured": true, 00:19:58.131 "data_offset": 0, 00:19:58.131 "data_size": 65536 00:19:58.131 }, 00:19:58.131 { 00:19:58.131 "name": "BaseBdev2", 00:19:58.131 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:58.131 "is_configured": true, 00:19:58.131 "data_offset": 0, 00:19:58.131 "data_size": 65536 00:19:58.131 } 00:19:58.132 ] 00:19:58.132 }' 00:19:58.132 05:00:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:58.132 05:00:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.132 05:00:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:58.132 05:00:21 -- bdev/bdev_raid.sh@191 -- 
# [[ spare == \s\p\a\r\e ]] 00:19:58.132 05:00:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:58.390 [2024-11-18 05:00:21.794594] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:58.390 [2024-11-18 05:00:21.894681] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:58.390 [2024-11-18 05:00:21.896046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:59.330 "name": "raid_bdev1", 00:19:59.330 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:59.330 "strip_size_kb": 0, 00:19:59.330 "state": "online", 00:19:59.330 "raid_level": "raid1", 00:19:59.330 "superblock": false, 00:19:59.330 "num_base_bdevs": 2, 00:19:59.330 "num_base_bdevs_discovered": 2, 00:19:59.330 "num_base_bdevs_operational": 2, 00:19:59.330 "base_bdevs_list": [ 00:19:59.330 { 00:19:59.330 "name": "spare", 00:19:59.330 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:59.330 "is_configured": true, 00:19:59.330 "data_offset": 0, 00:19:59.330 "data_size": 65536 00:19:59.330 }, 00:19:59.330 { 00:19:59.330 "name": "BaseBdev2", 00:19:59.330 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:59.330 "is_configured": true, 00:19:59.330 "data_offset": 0, 00:19:59.330 "data_size": 65536 00:19:59.330 } 00:19:59.330 ] 00:19:59.330 }' 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@660 -- # break 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.330 05:00:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.589 05:00:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:59.589 "name": "raid_bdev1", 00:19:59.589 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:19:59.589 "strip_size_kb": 0, 00:19:59.589 "state": "online", 00:19:59.589 "raid_level": "raid1", 00:19:59.589 "superblock": false, 00:19:59.589 "num_base_bdevs": 2, 00:19:59.589 "num_base_bdevs_discovered": 2, 
00:19:59.589 "num_base_bdevs_operational": 2, 00:19:59.589 "base_bdevs_list": [ 00:19:59.589 { 00:19:59.589 "name": "spare", 00:19:59.589 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:19:59.589 "is_configured": true, 00:19:59.589 "data_offset": 0, 00:19:59.589 "data_size": 65536 00:19:59.589 }, 00:19:59.589 { 00:19:59.589 "name": "BaseBdev2", 00:19:59.589 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:19:59.589 "is_configured": true, 00:19:59.589 "data_offset": 0, 00:19:59.589 "data_size": 65536 00:19:59.589 } 00:19:59.589 ] 00:19:59.589 }' 00:19:59.589 05:00:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:59.589 05:00:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.849 05:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.108 05:00:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.108 "name": "raid_bdev1", 00:20:00.108 "uuid": "b0adea1c-e79a-4fa8-9ca1-29219f301f2e", 00:20:00.108 "strip_size_kb": 0, 00:20:00.108 "state": "online", 00:20:00.108 "raid_level": "raid1", 00:20:00.108 "superblock": false, 00:20:00.108 "num_base_bdevs": 2, 00:20:00.108 "num_base_bdevs_discovered": 2, 00:20:00.108 "num_base_bdevs_operational": 2, 00:20:00.108 "base_bdevs_list": [ 00:20:00.108 { 00:20:00.108 "name": "spare", 00:20:00.108 "uuid": "75bd8757-5e8d-5c92-a20d-6e5157a6f46f", 00:20:00.108 "is_configured": true, 00:20:00.108 "data_offset": 0, 00:20:00.108 "data_size": 65536 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "name": "BaseBdev2", 00:20:00.108 "uuid": "9bc6abec-9cdc-4e19-989d-523ad0f8cc6b", 00:20:00.108 "is_configured": true, 00:20:00.108 "data_offset": 0, 00:20:00.108 "data_size": 65536 00:20:00.108 } 00:20:00.108 ] 00:20:00.108 }' 00:20:00.108 05:00:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.108 05:00:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.367 05:00:23 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:00.367 [2024-11-18 05:00:23.842028] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.367 [2024-11-18 05:00:23.842286] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.367 00:20:00.367 Latency(us) 00:20:00.367 [2024-11-18T05:00:23.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.367 
[2024-11-18T05:00:23.891Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:00.367 raid_bdev1 : 10.10 99.36 298.08 0.00 0.00 13435.79 247.62 109147.23 00:20:00.367 [2024-11-18T05:00:23.891Z] =================================================================================================================== 00:20:00.367 [2024-11-18T05:00:23.891Z] Total : 99.36 298.08 0.00 0.00 13435.79 247.62 109147.23 00:20:00.367 [2024-11-18 05:00:23.882684] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.367 0 00:20:00.367 [2024-11-18 05:00:23.882880] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.367 [2024-11-18 05:00:23.882975] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.367 [2024-11-18 05:00:23.882991] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:20:00.627 05:00:23 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.627 05:00:23 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:00.887 05:00:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:00.887 05:00:24 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:00.887 05:00:24 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@12 -- # local i 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:00.887 /dev/nbd0 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:00.887 05:00:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:00.887 05:00:24 -- common/autotest_common.sh@867 -- # local i 00:20:00.887 05:00:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:00.887 05:00:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:00.887 05:00:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:00.887 05:00:24 -- common/autotest_common.sh@871 -- # break 00:20:00.887 05:00:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:00.887 05:00:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:00.887 05:00:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:00.887 1+0 records in 00:20:00.887 1+0 records out 00:20:00.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361763 s, 11.3 MB/s 00:20:00.887 05:00:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.887 05:00:24 -- common/autotest_common.sh@884 -- # size=4096 00:20:00.887 05:00:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.887 05:00:24 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:00.887 05:00:24 -- common/autotest_common.sh@887 -- # return 0 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:00.887 05:00:24 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:00.887 05:00:24 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:00.887 05:00:24 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@12 -- # local i 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:00.887 05:00:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:01.147 /dev/nbd1 00:20:01.147 05:00:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:01.147 05:00:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:01.147 05:00:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:01.147 05:00:24 -- common/autotest_common.sh@867 -- # local i 00:20:01.147 05:00:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:01.147 05:00:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:01.147 05:00:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:01.147 05:00:24 -- common/autotest_common.sh@871 -- # break 00:20:01.147 05:00:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:01.147 05:00:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:01.147 05:00:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.147 1+0 records in 00:20:01.147 1+0 records out 00:20:01.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028849 s, 14.2 MB/s 00:20:01.147 05:00:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.147 05:00:24 -- common/autotest_common.sh@884 -- # size=4096 00:20:01.147 05:00:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.147 05:00:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:01.147 05:00:24 -- common/autotest_common.sh@887 -- # return 0 00:20:01.147 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.147 05:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:01.147 05:00:24 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:01.407 05:00:24 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:01.407 05:00:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:01.407 05:00:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:01.407 05:00:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:01.407 05:00:24 -- bdev/nbd_common.sh@51 -- # local i 00:20:01.407 05:00:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:01.407 05:00:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd1 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@41 -- # break 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@45 -- # return 0 00:20:01.666 05:00:25 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@51 -- # local i 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:01.666 05:00:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@41 -- # break 00:20:01.925 05:00:25 -- bdev/nbd_common.sh@45 -- # return 0 00:20:01.925 05:00:25 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:01.925 05:00:25 -- bdev/bdev_raid.sh@709 -- # killprocess 79290 00:20:01.925 05:00:25 -- common/autotest_common.sh@936 -- # '[' -z 79290 ']' 00:20:01.925 05:00:25 -- common/autotest_common.sh@940 -- # kill -0 79290 00:20:01.925 05:00:25 -- common/autotest_common.sh@941 -- # uname 00:20:01.925 05:00:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.925 05:00:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79290 00:20:01.925 killing process with pid 79290 00:20:01.925 Received shutdown signal, test time was about 11.617242 seconds 00:20:01.925 00:20:01.925 Latency(us) 00:20:01.925 [2024-11-18T05:00:25.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.925 [2024-11-18T05:00:25.449Z] =================================================================================================================== 00:20:01.925 [2024-11-18T05:00:25.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.925 05:00:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:01.925 05:00:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:01.925 05:00:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79290' 00:20:01.925 05:00:25 -- common/autotest_common.sh@955 -- # kill 79290 00:20:01.925 [2024-11-18 05:00:25.378500] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.925 05:00:25 -- common/autotest_common.sh@960 -- # wait 79290 00:20:02.185 [2024-11-18 05:00:25.545546] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:03.122 00:20:03.122 real 0m16.199s 00:20:03.122 user 0m23.160s 00:20:03.122 sys 
0m1.929s 00:20:03.122 ************************************ 00:20:03.122 END TEST raid_rebuild_test_io 00:20:03.122 ************************************ 00:20:03.122 05:00:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:03.122 05:00:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:03.122 05:00:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:03.122 05:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:03.122 05:00:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.122 ************************************ 00:20:03.122 START TEST raid_rebuild_test_sb_io 00:20:03.122 ************************************ 00:20:03.122 05:00:26 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@544 -- # raid_pid=79727 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79727 /var/tmp/spdk-raid.sock 00:20:03.122 05:00:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:03.122 05:00:26 -- common/autotest_common.sh@829 -- # '[' -z 79727 ']' 00:20:03.122 05:00:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:03.122 05:00:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.122 05:00:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:03.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
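raid_rebuild_test_sb_io repeats the run above with superblock=true: each base bdev is a passthru bdev layered on a malloc backing, and bdev_raid_create is passed -s so a superblock is written at the start of every member. A condensed sketch of the setup RPCs the test issues next (same socket as before; bdev names exactly as they appear in the log below):

  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
  "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2_malloc
  "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1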
00:20:03.122 05:00:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.122 05:00:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.381 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:03.381 Zero copy mechanism will not be used. 00:20:03.381 [2024-11-18 05:00:26.664687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:03.381 [2024-11-18 05:00:26.664853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79727 ] 00:20:03.381 [2024-11-18 05:00:26.820213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.641 [2024-11-18 05:00:26.975741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.641 [2024-11-18 05:00:27.148296] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.211 05:00:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.211 05:00:27 -- common/autotest_common.sh@862 -- # return 0 00:20:04.211 05:00:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:04.211 05:00:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:04.211 05:00:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:04.471 BaseBdev1_malloc 00:20:04.471 05:00:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.730 [2024-11-18 05:00:28.092786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:04.730 [2024-11-18 05:00:28.092905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.730 [2024-11-18 05:00:28.092958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:04.730 [2024-11-18 05:00:28.092975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.730 [2024-11-18 05:00:28.095612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.730 [2024-11-18 05:00:28.095685] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.730 BaseBdev1 00:20:04.730 05:00:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:04.730 05:00:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:04.730 05:00:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:04.989 BaseBdev2_malloc 00:20:04.989 05:00:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:05.249 [2024-11-18 05:00:28.518174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:05.249 [2024-11-18 05:00:28.518364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.249 [2024-11-18 05:00:28.518412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:05.249 [2024-11-18 05:00:28.518433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.249 [2024-11-18 05:00:28.520643] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:05.249 [2024-11-18 05:00:28.520716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:05.249 BaseBdev2 00:20:05.249 05:00:28 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:05.249 spare_malloc 00:20:05.249 05:00:28 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:05.508 spare_delay 00:20:05.508 05:00:28 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:05.768 [2024-11-18 05:00:29.175577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:05.768 [2024-11-18 05:00:29.175673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.768 [2024-11-18 05:00:29.175701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:20:05.768 [2024-11-18 05:00:29.175716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.768 [2024-11-18 05:00:29.177911] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.768 [2024-11-18 05:00:29.177984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:05.768 spare 00:20:05.768 05:00:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:06.027 [2024-11-18 05:00:29.367717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.027 [2024-11-18 05:00:29.369637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.027 [2024-11-18 05:00:29.369858] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:20:06.027 [2024-11-18 05:00:29.369880] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:06.027 [2024-11-18 05:00:29.370048] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:06.027 [2024-11-18 05:00:29.370512] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:20:06.027 [2024-11-18 05:00:29.370541] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:20:06.027 [2024-11-18 05:00:29.370739] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.027 05:00:29 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.027 05:00:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.286 05:00:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.286 "name": "raid_bdev1", 00:20:06.286 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:06.286 "strip_size_kb": 0, 00:20:06.286 "state": "online", 00:20:06.286 "raid_level": "raid1", 00:20:06.286 "superblock": true, 00:20:06.286 "num_base_bdevs": 2, 00:20:06.286 "num_base_bdevs_discovered": 2, 00:20:06.286 "num_base_bdevs_operational": 2, 00:20:06.286 "base_bdevs_list": [ 00:20:06.286 { 00:20:06.286 "name": "BaseBdev1", 00:20:06.286 "uuid": "6c06ce65-6aca-50e6-82d6-8c6a2a015efe", 00:20:06.286 "is_configured": true, 00:20:06.286 "data_offset": 2048, 00:20:06.286 "data_size": 63488 00:20:06.286 }, 00:20:06.286 { 00:20:06.286 "name": "BaseBdev2", 00:20:06.286 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:06.286 "is_configured": true, 00:20:06.286 "data_offset": 2048, 00:20:06.286 "data_size": 63488 00:20:06.286 } 00:20:06.286 ] 00:20:06.286 }' 00:20:06.286 05:00:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.286 05:00:29 -- common/autotest_common.sh@10 -- # set +x 00:20:06.544 05:00:29 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:06.544 05:00:29 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:06.804 [2024-11-18 05:00:30.196266] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.804 05:00:30 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:06.804 05:00:30 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.804 05:00:30 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:07.063 05:00:30 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:07.063 05:00:30 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:07.063 05:00:30 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:07.063 05:00:30 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:07.322 [2024-11-18 05:00:30.650692] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:20:07.322 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.322 Zero copy mechanism will not be used. 00:20:07.322 Running I/O for 60 seconds... 
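Everything bdevperf is now exercising was assembled over JSON-RPC in the xtrace above. Condensed to just the RPC calls, with the same names and arguments as logged (each 32 MiB malloc bdev has 65536 512-byte blocks; the -s superblock reserves the first 2048 of them, which is why the raid reports blockcnt 63488 and data_offset 2048):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # expect "online"

The spare used by the rebuild steps below is built the same way but with a delay bdev in the middle (spare_malloc -> spare_delay -> spare), presumably so the rebuild runs slowly enough for its progress to be observed.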
00:20:07.322 [2024-11-18 05:00:30.714064] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:07.322 [2024-11-18 05:00:30.721048] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.322 05:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.600 05:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.600 "name": "raid_bdev1", 00:20:07.600 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:07.600 "strip_size_kb": 0, 00:20:07.600 "state": "online", 00:20:07.600 "raid_level": "raid1", 00:20:07.600 "superblock": true, 00:20:07.600 "num_base_bdevs": 2, 00:20:07.600 "num_base_bdevs_discovered": 1, 00:20:07.600 "num_base_bdevs_operational": 1, 00:20:07.600 "base_bdevs_list": [ 00:20:07.600 { 00:20:07.600 "name": null, 00:20:07.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.600 "is_configured": false, 00:20:07.600 "data_offset": 2048, 00:20:07.600 "data_size": 63488 00:20:07.600 }, 00:20:07.600 { 00:20:07.600 "name": "BaseBdev2", 00:20:07.600 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:07.600 "is_configured": true, 00:20:07.600 "data_offset": 2048, 00:20:07.600 "data_size": 63488 00:20:07.600 } 00:20:07.600 ] 00:20:07.600 }' 00:20:07.600 05:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.600 05:00:31 -- common/autotest_common.sh@10 -- # set +x 00:20:07.893 05:00:31 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.166 [2024-11-18 05:00:31.471204] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:08.166 [2024-11-18 05:00:31.471286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.166 05:00:31 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:08.166 [2024-11-18 05:00:31.533577] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:08.166 [2024-11-18 05:00:31.535483] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.166 [2024-11-18 05:00:31.663711] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.166 [2024-11-18 05:00:31.664072] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.425 [2024-11-18 05:00:31.890850] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:08.425 [2024-11-18 05:00:31.891054] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.993 [2024-11-18 05:00:32.236790] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:08.993 [2024-11-18 05:00:32.372600] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.253 [2024-11-18 05:00:32.734141] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:09.253 "name": "raid_bdev1", 00:20:09.253 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:09.253 "strip_size_kb": 0, 00:20:09.253 "state": "online", 00:20:09.253 "raid_level": "raid1", 00:20:09.253 "superblock": true, 00:20:09.253 "num_base_bdevs": 2, 00:20:09.253 "num_base_bdevs_discovered": 2, 00:20:09.253 "num_base_bdevs_operational": 2, 00:20:09.253 "process": { 00:20:09.253 "type": "rebuild", 00:20:09.253 "target": "spare", 00:20:09.253 "progress": { 00:20:09.253 "blocks": 16384, 00:20:09.253 "percent": 25 00:20:09.253 } 00:20:09.253 }, 00:20:09.253 "base_bdevs_list": [ 00:20:09.253 { 00:20:09.253 "name": "spare", 00:20:09.253 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:09.253 "is_configured": true, 00:20:09.253 "data_offset": 2048, 00:20:09.253 "data_size": 63488 00:20:09.253 }, 00:20:09.253 { 00:20:09.253 "name": "BaseBdev2", 00:20:09.253 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:09.253 "is_configured": true, 00:20:09.253 "data_offset": 2048, 00:20:09.253 "data_size": 63488 00:20:09.253 } 00:20:09.253 ] 00:20:09.253 }' 00:20:09.253 05:00:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:09.513 05:00:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.513 05:00:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:09.513 05:00:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.513 05:00:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:09.513 [2024-11-18 05:00:32.950289] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.513 [2024-11-18 05:00:32.950796] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.513 [2024-11-18 05:00:33.028469] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.772 [2024-11-18 05:00:33.059119] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.772 
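This stretch is the core of the degraded-I/O test: with background I/O still running, BaseBdev1 is pulled out of the mirror (the raid1 array stays online with one operational base bdev), and the spare is then attached to drive a rebuild. All of the state and progress assertions are jq filters over a single RPC; a sketch of the polling pattern, using the commands as logged ($rpc as in the earlier sketch; the loop structure is an illustration, not the harness's exact code):

    $rpc bdev_raid_remove_base_bdev BaseBdev1       # degrade: 2 -> 1 operational, state stays "online"
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # attach the spare; a rebuild process starts
    while :; do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [ "$(jq -r '.process.type // "none"' <<< "$info")" = rebuild ] || break
        jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
        sleep 1
    done

The test also yanks the rebuild target itself mid-copy (bdev_raid_remove_base_bdev spare, just above), which is what produces the "Finished rebuild ... No such device" warning that follows.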
[2024-11-18 05:00:33.065961] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.772 [2024-11-18 05:00:33.173163] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:09.772 [2024-11-18 05:00:33.181176] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.772 [2024-11-18 05:00:33.213291] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.772 05:00:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.032 05:00:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.032 "name": "raid_bdev1", 00:20:10.032 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:10.032 "strip_size_kb": 0, 00:20:10.032 "state": "online", 00:20:10.032 "raid_level": "raid1", 00:20:10.032 "superblock": true, 00:20:10.032 "num_base_bdevs": 2, 00:20:10.032 "num_base_bdevs_discovered": 1, 00:20:10.032 "num_base_bdevs_operational": 1, 00:20:10.032 "base_bdevs_list": [ 00:20:10.032 { 00:20:10.032 "name": null, 00:20:10.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.032 "is_configured": false, 00:20:10.032 "data_offset": 2048, 00:20:10.032 "data_size": 63488 00:20:10.032 }, 00:20:10.032 { 00:20:10.032 "name": "BaseBdev2", 00:20:10.032 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:10.032 "is_configured": true, 00:20:10.032 "data_offset": 2048, 00:20:10.032 "data_size": 63488 00:20:10.032 } 00:20:10.032 ] 00:20:10.032 }' 00:20:10.032 05:00:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.032 05:00:33 -- common/autotest_common.sh@10 -- # set +x 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.600 05:00:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.600 05:00:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.600 "name": "raid_bdev1", 00:20:10.600 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:10.600 "strip_size_kb": 0, 00:20:10.600 "state": "online", 00:20:10.600 
"raid_level": "raid1", 00:20:10.600 "superblock": true, 00:20:10.600 "num_base_bdevs": 2, 00:20:10.600 "num_base_bdevs_discovered": 1, 00:20:10.600 "num_base_bdevs_operational": 1, 00:20:10.600 "base_bdevs_list": [ 00:20:10.600 { 00:20:10.600 "name": null, 00:20:10.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.600 "is_configured": false, 00:20:10.600 "data_offset": 2048, 00:20:10.600 "data_size": 63488 00:20:10.600 }, 00:20:10.600 { 00:20:10.600 "name": "BaseBdev2", 00:20:10.600 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:10.600 "is_configured": true, 00:20:10.600 "data_offset": 2048, 00:20:10.600 "data_size": 63488 00:20:10.600 } 00:20:10.600 ] 00:20:10.600 }' 00:20:10.600 05:00:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:10.600 05:00:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:10.600 05:00:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.600 05:00:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:10.600 05:00:34 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:10.859 [2024-11-18 05:00:34.294843] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:10.859 [2024-11-18 05:00:34.294906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:10.859 05:00:34 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:10.859 [2024-11-18 05:00:34.348330] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:20:10.859 [2024-11-18 05:00:34.350401] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:11.118 [2024-11-18 05:00:34.459161] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:11.118 [2024-11-18 05:00:34.459607] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:11.378 [2024-11-18 05:00:34.718569] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:11.378 [2024-11-18 05:00:34.718906] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:11.947 [2024-11-18 05:00:35.191381] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:11.947 [2024-11-18 05:00:35.191714] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.947 05:00:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.206 [2024-11-18 05:00:35.519761] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:12.206 [2024-11-18 
05:00:35.520131] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:12.206 05:00:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.206 "name": "raid_bdev1", 00:20:12.206 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:12.206 "strip_size_kb": 0, 00:20:12.206 "state": "online", 00:20:12.206 "raid_level": "raid1", 00:20:12.206 "superblock": true, 00:20:12.207 "num_base_bdevs": 2, 00:20:12.207 "num_base_bdevs_discovered": 2, 00:20:12.207 "num_base_bdevs_operational": 2, 00:20:12.207 "process": { 00:20:12.207 "type": "rebuild", 00:20:12.207 "target": "spare", 00:20:12.207 "progress": { 00:20:12.207 "blocks": 14336, 00:20:12.207 "percent": 22 00:20:12.207 } 00:20:12.207 }, 00:20:12.207 "base_bdevs_list": [ 00:20:12.207 { 00:20:12.207 "name": "spare", 00:20:12.207 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:12.207 "is_configured": true, 00:20:12.207 "data_offset": 2048, 00:20:12.207 "data_size": 63488 00:20:12.207 }, 00:20:12.207 { 00:20:12.207 "name": "BaseBdev2", 00:20:12.207 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:12.207 "is_configured": true, 00:20:12.207 "data_offset": 2048, 00:20:12.207 "data_size": 63488 00:20:12.207 } 00:20:12.207 ] 00:20:12.207 }' 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:12.207 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@657 -- # local timeout=409 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.207 05:00:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.466 [2024-11-18 05:00:35.750571] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:12.466 05:00:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.466 "name": "raid_bdev1", 00:20:12.466 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:12.466 "strip_size_kb": 0, 00:20:12.466 "state": "online", 00:20:12.466 "raid_level": "raid1", 00:20:12.466 "superblock": true, 00:20:12.466 "num_base_bdevs": 2, 00:20:12.466 "num_base_bdevs_discovered": 2, 00:20:12.466 "num_base_bdevs_operational": 2, 00:20:12.466 "process": { 00:20:12.466 "type": "rebuild", 00:20:12.466 
"target": "spare", 00:20:12.466 "progress": { 00:20:12.466 "blocks": 16384, 00:20:12.466 "percent": 25 00:20:12.466 } 00:20:12.466 }, 00:20:12.466 "base_bdevs_list": [ 00:20:12.466 { 00:20:12.466 "name": "spare", 00:20:12.466 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:12.466 "is_configured": true, 00:20:12.466 "data_offset": 2048, 00:20:12.466 "data_size": 63488 00:20:12.466 }, 00:20:12.466 { 00:20:12.466 "name": "BaseBdev2", 00:20:12.466 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:12.466 "is_configured": true, 00:20:12.466 "data_offset": 2048, 00:20:12.466 "data_size": 63488 00:20:12.466 } 00:20:12.466 ] 00:20:12.466 }' 00:20:12.466 05:00:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.466 05:00:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.466 05:00:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:12.466 05:00:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.466 05:00:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:12.724 [2024-11-18 05:00:36.099196] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:12.982 [2024-11-18 05:00:36.345903] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:12.982 [2024-11-18 05:00:36.346169] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:13.549 [2024-11-18 05:00:36.802467] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:13.549 [2024-11-18 05:00:36.802769] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.549 05:00:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.808 05:00:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:13.808 "name": "raid_bdev1", 00:20:13.808 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:13.808 "strip_size_kb": 0, 00:20:13.808 "state": "online", 00:20:13.808 "raid_level": "raid1", 00:20:13.808 "superblock": true, 00:20:13.808 "num_base_bdevs": 2, 00:20:13.808 "num_base_bdevs_discovered": 2, 00:20:13.808 "num_base_bdevs_operational": 2, 00:20:13.808 "process": { 00:20:13.808 "type": "rebuild", 00:20:13.808 "target": "spare", 00:20:13.808 "progress": { 00:20:13.808 "blocks": 30720, 00:20:13.808 "percent": 48 00:20:13.808 } 00:20:13.808 }, 00:20:13.808 "base_bdevs_list": [ 00:20:13.808 { 00:20:13.808 "name": "spare", 00:20:13.808 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:13.808 "is_configured": true, 00:20:13.808 "data_offset": 2048, 00:20:13.808 "data_size": 63488 00:20:13.808 }, 00:20:13.808 { 00:20:13.808 "name": "BaseBdev2", 00:20:13.808 "uuid": 
"a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:13.808 "is_configured": true, 00:20:13.808 "data_offset": 2048, 00:20:13.808 "data_size": 63488 00:20:13.808 } 00:20:13.808 ] 00:20:13.808 }' 00:20:13.808 05:00:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:13.808 05:00:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.808 05:00:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:13.808 [2024-11-18 05:00:37.129237] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:13.808 05:00:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.808 05:00:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:14.067 [2024-11-18 05:00:37.331983] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:14.067 [2024-11-18 05:00:37.535851] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:14.353 [2024-11-18 05:00:37.644008] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.921 [2024-11-18 05:00:38.281956] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:14.921 [2024-11-18 05:00:38.282243] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:14.921 "name": "raid_bdev1", 00:20:14.921 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:14.921 "strip_size_kb": 0, 00:20:14.921 "state": "online", 00:20:14.921 "raid_level": "raid1", 00:20:14.921 "superblock": true, 00:20:14.921 "num_base_bdevs": 2, 00:20:14.921 "num_base_bdevs_discovered": 2, 00:20:14.921 "num_base_bdevs_operational": 2, 00:20:14.921 "process": { 00:20:14.921 "type": "rebuild", 00:20:14.921 "target": "spare", 00:20:14.921 "progress": { 00:20:14.921 "blocks": 53248, 00:20:14.921 "percent": 83 00:20:14.921 } 00:20:14.921 }, 00:20:14.921 "base_bdevs_list": [ 00:20:14.921 { 00:20:14.921 "name": "spare", 00:20:14.921 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:14.921 "is_configured": true, 00:20:14.921 "data_offset": 2048, 00:20:14.921 "data_size": 63488 00:20:14.921 }, 00:20:14.921 { 00:20:14.921 "name": "BaseBdev2", 00:20:14.921 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:14.921 "is_configured": true, 00:20:14.921 "data_offset": 2048, 00:20:14.921 "data_size": 63488 00:20:14.921 } 00:20:14.921 ] 00:20:14.921 }' 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:14.921 05:00:38 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.921 05:00:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:15.490 [2024-11-18 05:00:38.943504] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:15.748 [2024-11-18 05:00:39.049646] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:15.748 [2024-11-18 05:00:39.051193] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.007 05:00:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.266 "name": "raid_bdev1", 00:20:16.266 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:16.266 "strip_size_kb": 0, 00:20:16.266 "state": "online", 00:20:16.266 "raid_level": "raid1", 00:20:16.266 "superblock": true, 00:20:16.266 "num_base_bdevs": 2, 00:20:16.266 "num_base_bdevs_discovered": 2, 00:20:16.266 "num_base_bdevs_operational": 2, 00:20:16.266 "base_bdevs_list": [ 00:20:16.266 { 00:20:16.266 "name": "spare", 00:20:16.266 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:16.266 "is_configured": true, 00:20:16.266 "data_offset": 2048, 00:20:16.266 "data_size": 63488 00:20:16.266 }, 00:20:16.266 { 00:20:16.266 "name": "BaseBdev2", 00:20:16.266 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:16.266 "is_configured": true, 00:20:16.266 "data_offset": 2048, 00:20:16.266 "data_size": 63488 00:20:16.266 } 00:20:16.266 ] 00:20:16.266 }' 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@660 -- # break 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.266 05:00:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.526 "name": "raid_bdev1", 00:20:16.526 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:16.526 
"strip_size_kb": 0, 00:20:16.526 "state": "online", 00:20:16.526 "raid_level": "raid1", 00:20:16.526 "superblock": true, 00:20:16.526 "num_base_bdevs": 2, 00:20:16.526 "num_base_bdevs_discovered": 2, 00:20:16.526 "num_base_bdevs_operational": 2, 00:20:16.526 "base_bdevs_list": [ 00:20:16.526 { 00:20:16.526 "name": "spare", 00:20:16.526 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:16.526 "is_configured": true, 00:20:16.526 "data_offset": 2048, 00:20:16.526 "data_size": 63488 00:20:16.526 }, 00:20:16.526 { 00:20:16.526 "name": "BaseBdev2", 00:20:16.526 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:16.526 "is_configured": true, 00:20:16.526 "data_offset": 2048, 00:20:16.526 "data_size": 63488 00:20:16.526 } 00:20:16.526 ] 00:20:16.526 }' 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.526 05:00:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.785 05:00:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.785 "name": "raid_bdev1", 00:20:16.785 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:16.785 "strip_size_kb": 0, 00:20:16.785 "state": "online", 00:20:16.785 "raid_level": "raid1", 00:20:16.785 "superblock": true, 00:20:16.785 "num_base_bdevs": 2, 00:20:16.785 "num_base_bdevs_discovered": 2, 00:20:16.785 "num_base_bdevs_operational": 2, 00:20:16.785 "base_bdevs_list": [ 00:20:16.785 { 00:20:16.785 "name": "spare", 00:20:16.785 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:16.785 "is_configured": true, 00:20:16.786 "data_offset": 2048, 00:20:16.786 "data_size": 63488 00:20:16.786 }, 00:20:16.786 { 00:20:16.786 "name": "BaseBdev2", 00:20:16.786 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:16.786 "is_configured": true, 00:20:16.786 "data_offset": 2048, 00:20:16.786 "data_size": 63488 00:20:16.786 } 00:20:16.786 ] 00:20:16.786 }' 00:20:16.786 05:00:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.786 05:00:40 -- common/autotest_common.sh@10 -- # set +x 00:20:17.045 05:00:40 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:17.304 [2024-11-18 05:00:40.731274] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.304 [2024-11-18 05:00:40.731360] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:20:17.304
00:20:17.304                                                                                                Latency(us)
00:20:17.304 [2024-11-18T05:00:40.828Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:20:17.304 [2024-11-18T05:00:40.828Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:20:17.304 raid_bdev1                  :      10.15      96.61     289.82       0.00     0.00    13555.64     251.35  112960.23
00:20:17.304 [2024-11-18T05:00:40.828Z] ===================================================================================================================
00:20:17.304 [2024-11-18T05:00:40.828Z] Total                       :                  96.61     289.82       0.00     0.00    13555.64     251.35  112960.23
00:20:17.304 [2024-11-18 05:00:40.826437] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.304 [2024-11-18 05:00:40.826487] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.304 0 00:20:17.304 [2024-11-18 05:00:40.826605] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.304 [2024-11-18 05:00:40.826644] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:20:17.563 05:00:40 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.563 05:00:40 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:17.823 05:00:41 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:17.823 05:00:41 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:17.823 05:00:41 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@12 -- # local i 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.823 05:00:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:18.082 /dev/nbd0 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:18.082 05:00:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:18.082 05:00:41 -- common/autotest_common.sh@867 -- # local i 00:20:18.082 05:00:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:18.082 05:00:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:18.082 05:00:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:18.082 05:00:41 -- common/autotest_common.sh@871 -- # break 00:20:18.082 05:00:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:18.082 05:00:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:18.082 05:00:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.082 1+0 records in 00:20:18.082 1+0 records out 00:20:18.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029606 s, 13.8 MB/s 00:20:18.082 05:00:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
05:00:41 -- common/autotest_common.sh@884 -- # size=4096 00:20:18.082 05:00:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.082 05:00:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:18.082 05:00:41 -- common/autotest_common.sh@887 -- # return 0 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.082 05:00:41 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:18.082 05:00:41 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:18.082 05:00:41 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.082 05:00:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:18.342 /dev/nbd1 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:18.342 05:00:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:18.342 05:00:41 -- common/autotest_common.sh@867 -- # local i 00:20:18.342 05:00:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:18.342 05:00:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:18.342 05:00:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:18.342 05:00:41 -- common/autotest_common.sh@871 -- # break 00:20:18.342 05:00:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:18.342 05:00:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:18.342 05:00:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.342 1+0 records in 00:20:18.342 1+0 records out 00:20:18.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335483 s, 12.2 MB/s 00:20:18.342 05:00:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.342 05:00:41 -- common/autotest_common.sh@884 -- # size=4096 00:20:18.342 05:00:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.342 05:00:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:18.342 05:00:41 -- common/autotest_common.sh@887 -- # return 0 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.342 05:00:41 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:18.342 05:00:41 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@51 -- # local i 
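The cmp above is the test's actual data-integrity check: the rebuilt spare and the surviving mirror leg are exported as NBD block devices and compared byte-for-byte. The -i 1048576 skips the first 1 MiB of both devices, which is exactly the superblock region (data_offset 2048 blocks x 512 B = 1048576 bytes), so only the data areas have to match. Reduced to its commands (as logged, error handling omitted; the NBD teardown follows below):

    $rpc nbd_start_disk spare /dev/nbd0        # rebuilt member
    $rpc nbd_start_disk BaseBdev2 /dev/nbd1    # surviving member
    cmp -i 1048576 /dev/nbd0 /dev/nbd1         # exits non-zero on the first mismatch
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0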
00:20:18.342 05:00:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:18.342 05:00:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@41 -- # break 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.602 05:00:42 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.602 05:00:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:18.603 05:00:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:18.603 05:00:42 -- bdev/nbd_common.sh@51 -- # local i 00:20:18.603 05:00:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:18.603 05:00:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@41 -- # break 00:20:18.862 05:00:42 -- bdev/nbd_common.sh@45 -- # return 0 00:20:18.862 05:00:42 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:18.862 05:00:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:18.862 05:00:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:18.862 05:00:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:19.122 05:00:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:19.382 [2024-11-18 05:00:42.850739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:19.382 [2024-11-18 05:00:42.850872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.382 [2024-11-18 05:00:42.850909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:19.382 [2024-11-18 05:00:42.850923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.382 [2024-11-18 05:00:42.853533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.382 [2024-11-18 05:00:42.853600] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:19.382 [2024-11-18 05:00:42.853722] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:19.382 [2024-11-18 05:00:42.853775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:19.382 BaseBdev1 00:20:19.382 05:00:42 -- 
bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:19.382 05:00:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:19.382 05:00:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:19.641 05:00:43 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:19.901 [2024-11-18 05:00:43.238950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:19.901 [2024-11-18 05:00:43.239052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.901 [2024-11-18 05:00:43.239085] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:20:19.901 [2024-11-18 05:00:43.239098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.901 [2024-11-18 05:00:43.239622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.901 [2024-11-18 05:00:43.239656] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:19.901 [2024-11-18 05:00:43.239777] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:19.901 [2024-11-18 05:00:43.239794] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:19.901 [2024-11-18 05:00:43.239822] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.901 [2024-11-18 05:00:43.239847] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:20:19.901 [2024-11-18 05:00:43.239917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:19.901 BaseBdev2 00:20:19.901 05:00:43 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:20.161 [2024-11-18 05:00:43.611050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:20.161 [2024-11-18 05:00:43.611155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.161 [2024-11-18 05:00:43.611186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:20:20.161 [2024-11-18 05:00:43.611217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.161 [2024-11-18 05:00:43.611734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.161 [2024-11-18 05:00:43.611791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:20.161 [2024-11-18 05:00:43.611913] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:20.161 [2024-11-18 05:00:43.611947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.161 spare 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:20.161 05:00:43 
-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.161 05:00:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.421 [2024-11-18 05:00:43.712059] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:20:20.421 [2024-11-18 05:00:43.712113] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:20.421 [2024-11-18 05:00:43.712267] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a7e0 00:20:20.421 [2024-11-18 05:00:43.712689] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:20:20.421 [2024-11-18 05:00:43.712715] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:20:20.421 [2024-11-18 05:00:43.712928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.421 05:00:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.421 "name": "raid_bdev1", 00:20:20.421 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:20.421 "strip_size_kb": 0, 00:20:20.421 "state": "online", 00:20:20.421 "raid_level": "raid1", 00:20:20.421 "superblock": true, 00:20:20.421 "num_base_bdevs": 2, 00:20:20.421 "num_base_bdevs_discovered": 2, 00:20:20.421 "num_base_bdevs_operational": 2, 00:20:20.421 "base_bdevs_list": [ 00:20:20.421 { 00:20:20.421 "name": "spare", 00:20:20.421 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:20.421 "is_configured": true, 00:20:20.421 "data_offset": 2048, 00:20:20.421 "data_size": 63488 00:20:20.421 }, 00:20:20.421 { 00:20:20.421 "name": "BaseBdev2", 00:20:20.421 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:20.421 "is_configured": true, 00:20:20.421 "data_offset": 2048, 00:20:20.421 "data_size": 63488 00:20:20.421 } 00:20:20.421 ] 00:20:20.421 }' 00:20:20.421 05:00:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.421 05:00:43 -- common/autotest_common.sh@10 -- # set +x 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:20.989 "name": "raid_bdev1", 00:20:20.989 "uuid": "7e108873-ebac-43c3-8c0f-0138305b3751", 00:20:20.989 "strip_size_kb": 0, 00:20:20.989 "state": "online", 00:20:20.989 "raid_level": "raid1", 00:20:20.989 "superblock": true, 00:20:20.989 
"num_base_bdevs": 2, 00:20:20.989 "num_base_bdevs_discovered": 2, 00:20:20.989 "num_base_bdevs_operational": 2, 00:20:20.989 "base_bdevs_list": [ 00:20:20.989 { 00:20:20.989 "name": "spare", 00:20:20.989 "uuid": "b5813808-4944-5186-a6af-109db03198cc", 00:20:20.989 "is_configured": true, 00:20:20.989 "data_offset": 2048, 00:20:20.989 "data_size": 63488 00:20:20.989 }, 00:20:20.989 { 00:20:20.989 "name": "BaseBdev2", 00:20:20.989 "uuid": "a8dd4805-6ced-5f50-aab7-5a018e183de5", 00:20:20.989 "is_configured": true, 00:20:20.989 "data_offset": 2048, 00:20:20.989 "data_size": 63488 00:20:20.989 } 00:20:20.989 ] 00:20:20.989 }' 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.989 05:00:44 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:21.249 05:00:44 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.249 05:00:44 -- bdev/bdev_raid.sh@709 -- # killprocess 79727 00:20:21.249 05:00:44 -- common/autotest_common.sh@936 -- # '[' -z 79727 ']' 00:20:21.249 05:00:44 -- common/autotest_common.sh@940 -- # kill -0 79727 00:20:21.249 05:00:44 -- common/autotest_common.sh@941 -- # uname 00:20:21.249 05:00:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.249 05:00:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79727 00:20:21.249 killing process with pid 79727 00:20:21.249 Received shutdown signal, test time was about 13.990471 seconds 00:20:21.249 00:20:21.249 Latency(us) 00:20:21.249 [2024-11-18T05:00:44.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.249 [2024-11-18T05:00:44.773Z] =================================================================================================================== 00:20:21.249 [2024-11-18T05:00:44.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.249 05:00:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:21.249 05:00:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:21.249 05:00:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79727' 00:20:21.249 05:00:44 -- common/autotest_common.sh@955 -- # kill 79727 00:20:21.249 [2024-11-18 05:00:44.643017] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.249 05:00:44 -- common/autotest_common.sh@960 -- # wait 79727 00:20:21.249 [2024-11-18 05:00:44.643102] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.249 [2024-11-18 05:00:44.643168] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.249 [2024-11-18 05:00:44.643183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:20:21.508 [2024-11-18 05:00:44.795857] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:22.447 00:20:22.447 real 0m19.159s 00:20:22.447 user 0m28.703s 00:20:22.447 sys 0m2.323s 00:20:22.447 05:00:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:22.447 05:00:45 -- common/autotest_common.sh@10 
-- # set +x 00:20:22.447 ************************************ 00:20:22.447 END TEST raid_rebuild_test_sb_io 00:20:22.447 ************************************ 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:20:22.447 05:00:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:22.447 05:00:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.447 05:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:22.447 ************************************ 00:20:22.447 START TEST raid_rebuild_test 00:20:22.447 ************************************ 00:20:22.447 05:00:45 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=80237 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80237 /var/tmp/spdk-raid.sock 00:20:22.447 05:00:45 -- common/autotest_common.sh@829 -- # '[' -z 80237 ']' 00:20:22.447 05:00:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:22.447 05:00:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:22.447 05:00:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.447 05:00:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:22.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:22.447 05:00:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.447 05:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:22.447 [2024-11-18 05:00:45.893604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:22.447 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:22.447 Zero copy mechanism will not be used. 00:20:22.447 [2024-11-18 05:00:45.893788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80237 ] 00:20:22.707 [2024-11-18 05:00:46.064645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.707 [2024-11-18 05:00:46.221601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.008 [2024-11-18 05:00:46.369037] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.576 05:00:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.576 05:00:46 -- common/autotest_common.sh@862 -- # return 0 00:20:23.576 05:00:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:23.576 05:00:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:23.576 05:00:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:23.576 BaseBdev1 00:20:23.576 05:00:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:23.576 05:00:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:23.576 05:00:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:23.835 BaseBdev2 00:20:23.835 05:00:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:23.835 05:00:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:23.835 05:00:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:24.093 BaseBdev3 00:20:24.093 05:00:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:24.093 05:00:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:24.093 05:00:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:24.353 BaseBdev4 00:20:24.353 05:00:47 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:24.612 spare_malloc 00:20:24.612 05:00:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:24.871 spare_delay 00:20:24.871 05:00:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:24.871 [2024-11-18 05:00:48.367465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.871 [2024-11-18 05:00:48.367569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.871 [2024-11-18 05:00:48.367600] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:20:24.871 [2024-11-18 05:00:48.367631] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.871 [2024-11-18 05:00:48.370111] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.871 [2024-11-18 05:00:48.370165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.871 spare 00:20:24.872 05:00:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:25.131 [2024-11-18 05:00:48.651640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.388 [2024-11-18 05:00:48.653737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.388 [2024-11-18 05:00:48.653809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:25.388 [2024-11-18 05:00:48.653859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:25.388 [2024-11-18 05:00:48.653937] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:20:25.388 [2024-11-18 05:00:48.653953] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:25.388 [2024-11-18 05:00:48.654151] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:25.388 [2024-11-18 05:00:48.654605] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:20:25.388 [2024-11-18 05:00:48.654632] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:20:25.388 [2024-11-18 05:00:48.654839] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.388 "name": "raid_bdev1", 00:20:25.388 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:25.388 "strip_size_kb": 0, 00:20:25.388 "state": "online", 00:20:25.388 "raid_level": "raid1", 00:20:25.388 "superblock": false, 00:20:25.388 "num_base_bdevs": 4, 00:20:25.388 "num_base_bdevs_discovered": 4, 00:20:25.388 "num_base_bdevs_operational": 4, 00:20:25.388 "base_bdevs_list": [ 00:20:25.388 { 00:20:25.388 "name": "BaseBdev1", 00:20:25.388 "uuid": 
"16f3a47c-be75-4fd2-a87e-32b81a951ac3", 00:20:25.388 "is_configured": true, 00:20:25.388 "data_offset": 0, 00:20:25.388 "data_size": 65536 00:20:25.388 }, 00:20:25.388 { 00:20:25.388 "name": "BaseBdev2", 00:20:25.388 "uuid": "5ba6784c-8daf-4e73-872b-2f7280971e5c", 00:20:25.388 "is_configured": true, 00:20:25.388 "data_offset": 0, 00:20:25.388 "data_size": 65536 00:20:25.388 }, 00:20:25.388 { 00:20:25.388 "name": "BaseBdev3", 00:20:25.388 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:25.388 "is_configured": true, 00:20:25.388 "data_offset": 0, 00:20:25.388 "data_size": 65536 00:20:25.388 }, 00:20:25.388 { 00:20:25.388 "name": "BaseBdev4", 00:20:25.388 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:25.388 "is_configured": true, 00:20:25.388 "data_offset": 0, 00:20:25.388 "data_size": 65536 00:20:25.388 } 00:20:25.388 ] 00:20:25.388 }' 00:20:25.388 05:00:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.388 05:00:48 -- common/autotest_common.sh@10 -- # set +x 00:20:25.647 05:00:49 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:25.647 05:00:49 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:25.905 [2024-11-18 05:00:49.328075] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.905 05:00:49 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:25.905 05:00:49 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.905 05:00:49 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:26.165 05:00:49 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:26.165 05:00:49 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:26.165 05:00:49 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:26.165 05:00:49 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@12 -- # local i 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.165 05:00:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:26.424 [2024-11-18 05:00:49.832039] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:26.424 /dev/nbd0 00:20:26.424 05:00:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:26.424 05:00:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:26.424 05:00:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:26.424 05:00:49 -- common/autotest_common.sh@867 -- # local i 00:20:26.424 05:00:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:26.424 05:00:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:26.424 05:00:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:26.424 05:00:49 -- common/autotest_common.sh@871 -- # break 00:20:26.424 05:00:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:26.424 05:00:49 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:26.424 05:00:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.424 1+0 records in 00:20:26.424 1+0 records out 00:20:26.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215688 s, 19.0 MB/s 00:20:26.424 05:00:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.424 05:00:49 -- common/autotest_common.sh@884 -- # size=4096 00:20:26.424 05:00:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.424 05:00:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:26.424 05:00:49 -- common/autotest_common.sh@887 -- # return 0 00:20:26.424 05:00:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.424 05:00:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:26.424 05:00:49 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:26.424 05:00:49 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:26.424 05:00:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:32.988 65536+0 records in 00:20:32.988 65536+0 records out 00:20:32.989 33554432 bytes (34 MB, 32 MiB) copied, 5.68455 s, 5.9 MB/s 00:20:32.989 05:00:55 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@51 -- # local i 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:32.989 [2024-11-18 05:00:55.795551] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@41 -- # break 00:20:32.989 05:00:55 -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.989 05:00:55 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:32.989 [2024-11-18 05:00:56.039638] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@124 -- 
# local num_base_bdevs_discovered 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.989 "name": "raid_bdev1", 00:20:32.989 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:32.989 "strip_size_kb": 0, 00:20:32.989 "state": "online", 00:20:32.989 "raid_level": "raid1", 00:20:32.989 "superblock": false, 00:20:32.989 "num_base_bdevs": 4, 00:20:32.989 "num_base_bdevs_discovered": 3, 00:20:32.989 "num_base_bdevs_operational": 3, 00:20:32.989 "base_bdevs_list": [ 00:20:32.989 { 00:20:32.989 "name": null, 00:20:32.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.989 "is_configured": false, 00:20:32.989 "data_offset": 0, 00:20:32.989 "data_size": 65536 00:20:32.989 }, 00:20:32.989 { 00:20:32.989 "name": "BaseBdev2", 00:20:32.989 "uuid": "5ba6784c-8daf-4e73-872b-2f7280971e5c", 00:20:32.989 "is_configured": true, 00:20:32.989 "data_offset": 0, 00:20:32.989 "data_size": 65536 00:20:32.989 }, 00:20:32.989 { 00:20:32.989 "name": "BaseBdev3", 00:20:32.989 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:32.989 "is_configured": true, 00:20:32.989 "data_offset": 0, 00:20:32.989 "data_size": 65536 00:20:32.989 }, 00:20:32.989 { 00:20:32.989 "name": "BaseBdev4", 00:20:32.989 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:32.989 "is_configured": true, 00:20:32.989 "data_offset": 0, 00:20:32.989 "data_size": 65536 00:20:32.989 } 00:20:32.989 ] 00:20:32.989 }' 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.989 05:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:32.989 05:00:56 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.248 [2024-11-18 05:00:56.663826] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:33.248 [2024-11-18 05:00:56.663878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.248 [2024-11-18 05:00:56.674139] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09620 00:20:33.248 [2024-11-18 05:00:56.675992] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:33.248 05:00:56 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.183 05:00:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.443 05:00:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.443 "name": "raid_bdev1", 00:20:34.443 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:34.443 "strip_size_kb": 0, 00:20:34.443 "state": "online", 00:20:34.443 "raid_level": "raid1", 00:20:34.443 "superblock": false, 00:20:34.443 
"num_base_bdevs": 4, 00:20:34.443 "num_base_bdevs_discovered": 4, 00:20:34.443 "num_base_bdevs_operational": 4, 00:20:34.443 "process": { 00:20:34.443 "type": "rebuild", 00:20:34.443 "target": "spare", 00:20:34.443 "progress": { 00:20:34.443 "blocks": 24576, 00:20:34.443 "percent": 37 00:20:34.443 } 00:20:34.443 }, 00:20:34.443 "base_bdevs_list": [ 00:20:34.443 { 00:20:34.443 "name": "spare", 00:20:34.443 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:34.443 "is_configured": true, 00:20:34.443 "data_offset": 0, 00:20:34.443 "data_size": 65536 00:20:34.443 }, 00:20:34.443 { 00:20:34.443 "name": "BaseBdev2", 00:20:34.443 "uuid": "5ba6784c-8daf-4e73-872b-2f7280971e5c", 00:20:34.443 "is_configured": true, 00:20:34.443 "data_offset": 0, 00:20:34.443 "data_size": 65536 00:20:34.443 }, 00:20:34.443 { 00:20:34.443 "name": "BaseBdev3", 00:20:34.443 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:34.443 "is_configured": true, 00:20:34.443 "data_offset": 0, 00:20:34.443 "data_size": 65536 00:20:34.443 }, 00:20:34.443 { 00:20:34.443 "name": "BaseBdev4", 00:20:34.443 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:34.443 "is_configured": true, 00:20:34.443 "data_offset": 0, 00:20:34.443 "data_size": 65536 00:20:34.443 } 00:20:34.443 ] 00:20:34.443 }' 00:20:34.443 05:00:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.443 05:00:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.443 05:00:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.443 05:00:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.443 05:00:57 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:34.701 [2024-11-18 05:00:58.162326] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.701 [2024-11-18 05:00:58.182147] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:34.701 [2024-11-18 05:00:58.182273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.701 05:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.960 05:00:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.960 "name": "raid_bdev1", 00:20:34.960 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:34.960 "strip_size_kb": 0, 00:20:34.960 "state": "online", 00:20:34.960 "raid_level": "raid1", 00:20:34.960 "superblock": false, 00:20:34.960 "num_base_bdevs": 4, 00:20:34.960 "num_base_bdevs_discovered": 3, 
00:20:34.960 "num_base_bdevs_operational": 3, 00:20:34.960 "base_bdevs_list": [ 00:20:34.960 { 00:20:34.960 "name": null, 00:20:34.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.960 "is_configured": false, 00:20:34.960 "data_offset": 0, 00:20:34.960 "data_size": 65536 00:20:34.960 }, 00:20:34.960 { 00:20:34.960 "name": "BaseBdev2", 00:20:34.960 "uuid": "5ba6784c-8daf-4e73-872b-2f7280971e5c", 00:20:34.960 "is_configured": true, 00:20:34.960 "data_offset": 0, 00:20:34.960 "data_size": 65536 00:20:34.960 }, 00:20:34.960 { 00:20:34.960 "name": "BaseBdev3", 00:20:34.960 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:34.960 "is_configured": true, 00:20:34.960 "data_offset": 0, 00:20:34.960 "data_size": 65536 00:20:34.960 }, 00:20:34.960 { 00:20:34.960 "name": "BaseBdev4", 00:20:34.960 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:34.960 "is_configured": true, 00:20:34.960 "data_offset": 0, 00:20:34.960 "data_size": 65536 00:20:34.960 } 00:20:34.960 ] 00:20:34.960 }' 00:20:34.960 05:00:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.960 05:00:58 -- common/autotest_common.sh@10 -- # set +x 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.219 05:00:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.479 05:00:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:35.479 "name": "raid_bdev1", 00:20:35.479 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:35.479 "strip_size_kb": 0, 00:20:35.479 "state": "online", 00:20:35.479 "raid_level": "raid1", 00:20:35.479 "superblock": false, 00:20:35.479 "num_base_bdevs": 4, 00:20:35.479 "num_base_bdevs_discovered": 3, 00:20:35.479 "num_base_bdevs_operational": 3, 00:20:35.479 "base_bdevs_list": [ 00:20:35.479 { 00:20:35.479 "name": null, 00:20:35.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.479 "is_configured": false, 00:20:35.479 "data_offset": 0, 00:20:35.479 "data_size": 65536 00:20:35.479 }, 00:20:35.479 { 00:20:35.479 "name": "BaseBdev2", 00:20:35.479 "uuid": "5ba6784c-8daf-4e73-872b-2f7280971e5c", 00:20:35.479 "is_configured": true, 00:20:35.479 "data_offset": 0, 00:20:35.479 "data_size": 65536 00:20:35.479 }, 00:20:35.479 { 00:20:35.479 "name": "BaseBdev3", 00:20:35.479 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:35.479 "is_configured": true, 00:20:35.479 "data_offset": 0, 00:20:35.479 "data_size": 65536 00:20:35.479 }, 00:20:35.479 { 00:20:35.479 "name": "BaseBdev4", 00:20:35.479 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:35.479 "is_configured": true, 00:20:35.479 "data_offset": 0, 00:20:35.479 "data_size": 65536 00:20:35.479 } 00:20:35.479 ] 00:20:35.479 }' 00:20:35.479 05:00:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:35.479 05:00:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:35.479 05:00:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:35.479 05:00:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:35.479 05:00:58 -- bdev/bdev_raid.sh@613 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.738 [2024-11-18 05:00:59.137108] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:35.738 [2024-11-18 05:00:59.137150] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.738 [2024-11-18 05:00:59.146734] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d096f0 00:20:35.738 [2024-11-18 05:00:59.148611] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:35.738 05:00:59 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.675 05:01:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.934 05:01:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.934 "name": "raid_bdev1", 00:20:36.934 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:36.934 "strip_size_kb": 0, 00:20:36.934 "state": "online", 00:20:36.934 "raid_level": "raid1", 00:20:36.934 "superblock": false, 00:20:36.934 "num_base_bdevs": 4, 00:20:36.934 "num_base_bdevs_discovered": 4, 00:20:36.934 "num_base_bdevs_operational": 4, 00:20:36.934 "process": { 00:20:36.934 "type": "rebuild", 00:20:36.934 "target": "spare", 00:20:36.934 "progress": { 00:20:36.934 "blocks": 24576, 00:20:36.934 "percent": 37 00:20:36.934 } 00:20:36.934 }, 00:20:36.934 "base_bdevs_list": [ 00:20:36.934 { 00:20:36.934 "name": "spare", 00:20:36.934 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:36.934 "is_configured": true, 00:20:36.934 "data_offset": 0, 00:20:36.934 "data_size": 65536 00:20:36.934 }, 00:20:36.934 { 00:20:36.934 "name": "BaseBdev2", 00:20:36.934 "uuid": "5ba6784c-8daf-4e73-872b-2f7280971e5c", 00:20:36.934 "is_configured": true, 00:20:36.934 "data_offset": 0, 00:20:36.934 "data_size": 65536 00:20:36.934 }, 00:20:36.934 { 00:20:36.934 "name": "BaseBdev3", 00:20:36.934 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:36.934 "is_configured": true, 00:20:36.934 "data_offset": 0, 00:20:36.934 "data_size": 65536 00:20:36.934 }, 00:20:36.934 { 00:20:36.934 "name": "BaseBdev4", 00:20:36.934 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:36.934 "is_configured": true, 00:20:36.935 "data_offset": 0, 00:20:36.935 "data_size": 65536 00:20:36.935 } 00:20:36.935 ] 00:20:36.935 }' 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 
']' 00:20:36.935 05:01:00 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:37.194 [2024-11-18 05:01:00.615024] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:37.194 [2024-11-18 05:01:00.654834] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d096f0 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.194 05:01:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.455 "name": "raid_bdev1", 00:20:37.455 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:37.455 "strip_size_kb": 0, 00:20:37.455 "state": "online", 00:20:37.455 "raid_level": "raid1", 00:20:37.455 "superblock": false, 00:20:37.455 "num_base_bdevs": 4, 00:20:37.455 "num_base_bdevs_discovered": 3, 00:20:37.455 "num_base_bdevs_operational": 3, 00:20:37.455 "process": { 00:20:37.455 "type": "rebuild", 00:20:37.455 "target": "spare", 00:20:37.455 "progress": { 00:20:37.455 "blocks": 34816, 00:20:37.455 "percent": 53 00:20:37.455 } 00:20:37.455 }, 00:20:37.455 "base_bdevs_list": [ 00:20:37.455 { 00:20:37.455 "name": "spare", 00:20:37.455 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:37.455 "is_configured": true, 00:20:37.455 "data_offset": 0, 00:20:37.455 "data_size": 65536 00:20:37.455 }, 00:20:37.455 { 00:20:37.455 "name": null, 00:20:37.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.455 "is_configured": false, 00:20:37.455 "data_offset": 0, 00:20:37.455 "data_size": 65536 00:20:37.455 }, 00:20:37.455 { 00:20:37.455 "name": "BaseBdev3", 00:20:37.455 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:37.455 "is_configured": true, 00:20:37.455 "data_offset": 0, 00:20:37.455 "data_size": 65536 00:20:37.455 }, 00:20:37.455 { 00:20:37.455 "name": "BaseBdev4", 00:20:37.455 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:37.455 "is_configured": true, 00:20:37.455 "data_offset": 0, 00:20:37.455 "data_size": 65536 00:20:37.455 } 00:20:37.455 ] 00:20:37.455 }' 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@657 -- # local timeout=434 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:37.455 
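The removal traced at this point can be replayed by hand against the same RPC socket. A minimal sketch in bash, built only from the rpc.py calls and jq filters that already appear in this trace (socket path, raid name, and base bdev names are the ones from this run):

    # Pull a base bdev out of raid_bdev1 while its rebuild is still running.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev2

    # The raid stays online but degraded: the removed slot becomes a null
    # entry and the operational count drops from 4 to 3.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_operational'

    # While a rebuild is active, its target bdev is exposed under .process.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'

This is the same polling pattern the test's verify helpers use above: dump all raid bdevs, select the one under test, and compare individual fields.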
05:01:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.455 05:01:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.714 05:01:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.714 "name": "raid_bdev1", 00:20:37.714 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:37.714 "strip_size_kb": 0, 00:20:37.714 "state": "online", 00:20:37.714 "raid_level": "raid1", 00:20:37.714 "superblock": false, 00:20:37.714 "num_base_bdevs": 4, 00:20:37.714 "num_base_bdevs_discovered": 3, 00:20:37.714 "num_base_bdevs_operational": 3, 00:20:37.714 "process": { 00:20:37.714 "type": "rebuild", 00:20:37.714 "target": "spare", 00:20:37.714 "progress": { 00:20:37.714 "blocks": 38912, 00:20:37.714 "percent": 59 00:20:37.714 } 00:20:37.714 }, 00:20:37.714 "base_bdevs_list": [ 00:20:37.714 { 00:20:37.714 "name": "spare", 00:20:37.714 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:37.714 "is_configured": true, 00:20:37.714 "data_offset": 0, 00:20:37.714 "data_size": 65536 00:20:37.714 }, 00:20:37.714 { 00:20:37.714 "name": null, 00:20:37.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.714 "is_configured": false, 00:20:37.714 "data_offset": 0, 00:20:37.714 "data_size": 65536 00:20:37.714 }, 00:20:37.714 { 00:20:37.714 "name": "BaseBdev3", 00:20:37.714 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:37.714 "is_configured": true, 00:20:37.714 "data_offset": 0, 00:20:37.714 "data_size": 65536 00:20:37.714 }, 00:20:37.714 { 00:20:37.714 "name": "BaseBdev4", 00:20:37.714 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:37.714 "is_configured": true, 00:20:37.714 "data_offset": 0, 00:20:37.714 "data_size": 65536 00:20:37.714 } 00:20:37.714 ] 00:20:37.714 }' 00:20:37.714 05:01:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.714 05:01:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.714 05:01:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.714 05:01:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.714 05:01:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.093 [2024-11-18 05:01:02.362138] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:39.093 [2024-11-18 05:01:02.362264] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:39.093 [2024-11-18 05:01:02.362321] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:39.093 "name": 
"raid_bdev1", 00:20:39.093 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:39.093 "strip_size_kb": 0, 00:20:39.093 "state": "online", 00:20:39.093 "raid_level": "raid1", 00:20:39.093 "superblock": false, 00:20:39.093 "num_base_bdevs": 4, 00:20:39.093 "num_base_bdevs_discovered": 3, 00:20:39.093 "num_base_bdevs_operational": 3, 00:20:39.093 "base_bdevs_list": [ 00:20:39.093 { 00:20:39.093 "name": "spare", 00:20:39.093 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:39.093 "is_configured": true, 00:20:39.093 "data_offset": 0, 00:20:39.093 "data_size": 65536 00:20:39.093 }, 00:20:39.093 { 00:20:39.093 "name": null, 00:20:39.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.093 "is_configured": false, 00:20:39.093 "data_offset": 0, 00:20:39.093 "data_size": 65536 00:20:39.093 }, 00:20:39.093 { 00:20:39.093 "name": "BaseBdev3", 00:20:39.093 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:39.093 "is_configured": true, 00:20:39.093 "data_offset": 0, 00:20:39.093 "data_size": 65536 00:20:39.093 }, 00:20:39.093 { 00:20:39.093 "name": "BaseBdev4", 00:20:39.093 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:39.093 "is_configured": true, 00:20:39.093 "data_offset": 0, 00:20:39.093 "data_size": 65536 00:20:39.093 } 00:20:39.093 ] 00:20:39.093 }' 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@660 -- # break 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.093 05:01:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.352 05:01:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:39.352 "name": "raid_bdev1", 00:20:39.352 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:39.352 "strip_size_kb": 0, 00:20:39.352 "state": "online", 00:20:39.352 "raid_level": "raid1", 00:20:39.352 "superblock": false, 00:20:39.352 "num_base_bdevs": 4, 00:20:39.352 "num_base_bdevs_discovered": 3, 00:20:39.352 "num_base_bdevs_operational": 3, 00:20:39.352 "base_bdevs_list": [ 00:20:39.352 { 00:20:39.352 "name": "spare", 00:20:39.352 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:39.352 "is_configured": true, 00:20:39.352 "data_offset": 0, 00:20:39.352 "data_size": 65536 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "name": null, 00:20:39.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.353 "is_configured": false, 00:20:39.353 "data_offset": 0, 00:20:39.353 "data_size": 65536 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "name": "BaseBdev3", 00:20:39.353 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:39.353 "is_configured": true, 00:20:39.353 "data_offset": 0, 00:20:39.353 "data_size": 65536 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "name": "BaseBdev4", 00:20:39.353 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 
00:20:39.353 "is_configured": true, 00:20:39.353 "data_offset": 0, 00:20:39.353 "data_size": 65536 00:20:39.353 } 00:20:39.353 ] 00:20:39.353 }' 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.353 05:01:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.612 05:01:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.612 "name": "raid_bdev1", 00:20:39.612 "uuid": "b1d63c29-9819-44d0-a68b-03ad584ccda2", 00:20:39.612 "strip_size_kb": 0, 00:20:39.612 "state": "online", 00:20:39.612 "raid_level": "raid1", 00:20:39.612 "superblock": false, 00:20:39.612 "num_base_bdevs": 4, 00:20:39.612 "num_base_bdevs_discovered": 3, 00:20:39.612 "num_base_bdevs_operational": 3, 00:20:39.612 "base_bdevs_list": [ 00:20:39.612 { 00:20:39.612 "name": "spare", 00:20:39.612 "uuid": "1510b900-0653-5513-9e3f-30f13b354dc4", 00:20:39.612 "is_configured": true, 00:20:39.612 "data_offset": 0, 00:20:39.612 "data_size": 65536 00:20:39.612 }, 00:20:39.612 { 00:20:39.612 "name": null, 00:20:39.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.612 "is_configured": false, 00:20:39.612 "data_offset": 0, 00:20:39.612 "data_size": 65536 00:20:39.612 }, 00:20:39.612 { 00:20:39.612 "name": "BaseBdev3", 00:20:39.612 "uuid": "d7722dc4-5112-4a1e-93d2-af3121aecf23", 00:20:39.612 "is_configured": true, 00:20:39.612 "data_offset": 0, 00:20:39.612 "data_size": 65536 00:20:39.612 }, 00:20:39.612 { 00:20:39.612 "name": "BaseBdev4", 00:20:39.612 "uuid": "f59f83d3-2912-4c07-9acd-9b777e724514", 00:20:39.612 "is_configured": true, 00:20:39.612 "data_offset": 0, 00:20:39.612 "data_size": 65536 00:20:39.612 } 00:20:39.612 ] 00:20:39.612 }' 00:20:39.612 05:01:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.612 05:01:02 -- common/autotest_common.sh@10 -- # set +x 00:20:39.872 05:01:03 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:39.872 [2024-11-18 05:01:03.389453] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.872 [2024-11-18 05:01:03.389505] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.872 [2024-11-18 05:01:03.389580] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.872 [2024-11-18 
05:01:03.389656] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.872 [2024-11-18 05:01:03.389673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:20:40.131 05:01:03 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.131 05:01:03 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:40.390 05:01:03 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:40.390 05:01:03 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:40.390 05:01:03 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@12 -- # local i 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:40.390 05:01:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:40.390 /dev/nbd0 00:20:40.650 05:01:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:40.650 05:01:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:40.650 05:01:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:40.650 05:01:03 -- common/autotest_common.sh@867 -- # local i 00:20:40.650 05:01:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:40.650 05:01:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:40.650 05:01:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:40.650 05:01:03 -- common/autotest_common.sh@871 -- # break 00:20:40.650 05:01:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:40.650 05:01:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:40.650 05:01:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:40.650 1+0 records in 00:20:40.650 1+0 records out 00:20:40.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024528 s, 16.7 MB/s 00:20:40.650 05:01:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.650 05:01:03 -- common/autotest_common.sh@884 -- # size=4096 00:20:40.650 05:01:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.650 05:01:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:40.650 05:01:03 -- common/autotest_common.sh@887 -- # return 0 00:20:40.650 05:01:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:40.650 05:01:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:40.650 05:01:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:40.650 /dev/nbd1 00:20:40.650 05:01:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:40.650 05:01:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:40.650 05:01:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:40.650 05:01:04 -- 
common/autotest_common.sh@867 -- # local i 00:20:40.650 05:01:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:40.650 05:01:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:40.650 05:01:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:40.650 05:01:04 -- common/autotest_common.sh@871 -- # break 00:20:40.650 05:01:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:40.650 05:01:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:40.650 05:01:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:40.650 1+0 records in 00:20:40.650 1+0 records out 00:20:40.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215981 s, 19.0 MB/s 00:20:40.650 05:01:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.650 05:01:04 -- common/autotest_common.sh@884 -- # size=4096 00:20:40.650 05:01:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.650 05:01:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:40.650 05:01:04 -- common/autotest_common.sh@887 -- # return 0 00:20:40.650 05:01:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:40.650 05:01:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:40.650 05:01:04 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:40.909 05:01:04 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:40.909 05:01:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:40.909 05:01:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.909 05:01:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:40.909 05:01:04 -- bdev/nbd_common.sh@51 -- # local i 00:20:40.909 05:01:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.909 05:01:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:41.168 05:01:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:41.168 05:01:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:41.168 05:01:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:41.168 05:01:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:41.169 05:01:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.169 05:01:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:41.169 05:01:04 -- bdev/nbd_common.sh@41 -- # break 00:20:41.169 05:01:04 -- bdev/nbd_common.sh@45 -- # return 0 00:20:41.169 05:01:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:41.169 05:01:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@41 -- # break 00:20:41.428 05:01:04 -- bdev/nbd_common.sh@45 -- # return 0 00:20:41.428 05:01:04 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:41.428 05:01:04 -- bdev/bdev_raid.sh@709 -- # killprocess 80237 
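The check that just completed is the data-integrity half of the test: the base bdev that was written through /dev/nbd0 earlier and the rebuilt spare are both exported over NBD and byte-compared. A minimal sketch of that flow, assuming the nbd kernel module is loaded and /dev/nbd0 and /dev/nbd1 are free (device and bdev names as in this run; the -i 0 offset matches data_offset 0, since this test runs without superblocks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Expose both bdevs as kernel block devices.
    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1

    # Byte-compare from offset 0 in both; exits non-zero on the first mismatch.
    cmp -i 0 /dev/nbd0 /dev/nbd1

    # Tear the NBD devices down again.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1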
00:20:41.428 05:01:04 -- common/autotest_common.sh@936 -- # '[' -z 80237 ']' 00:20:41.428 05:01:04 -- common/autotest_common.sh@940 -- # kill -0 80237 00:20:41.428 05:01:04 -- common/autotest_common.sh@941 -- # uname 00:20:41.428 05:01:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.428 05:01:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80237 00:20:41.428 05:01:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:41.428 05:01:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:41.428 05:01:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80237' 00:20:41.428 killing process with pid 80237 00:20:41.428 05:01:04 -- common/autotest_common.sh@955 -- # kill 80237 00:20:41.428 Received shutdown signal, test time was about 60.000000 seconds 00:20:41.428 00:20:41.428 Latency(us) 00:20:41.428 [2024-11-18T05:01:04.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.428 [2024-11-18T05:01:04.952Z] =================================================================================================================== 00:20:41.428 [2024-11-18T05:01:04.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.428 [2024-11-18 05:01:04.808759] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.428 05:01:04 -- common/autotest_common.sh@960 -- # wait 80237 00:20:41.687 [2024-11-18 05:01:05.121799] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:42.625 00:20:42.625 real 0m20.208s 00:20:42.625 user 0m25.840s 00:20:42.625 sys 0m3.637s 00:20:42.625 05:01:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:42.625 05:01:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.625 ************************************ 00:20:42.625 END TEST raid_rebuild_test 00:20:42.625 ************************************ 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:20:42.625 05:01:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:42.625 05:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.625 05:01:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.625 ************************************ 00:20:42.625 START TEST raid_rebuild_test_sb 00:20:42.625 ************************************ 00:20:42.625 05:01:06 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:42.625 
05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@544 -- # raid_pid=80734 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:42.625 05:01:06 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80734 /var/tmp/spdk-raid.sock 00:20:42.625 05:01:06 -- common/autotest_common.sh@829 -- # '[' -z 80734 ']' 00:20:42.625 05:01:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:42.625 05:01:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:42.625 05:01:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:42.625 05:01:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.625 05:01:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.885 [2024-11-18 05:01:06.148758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:42.885 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:42.885 Zero copy mechanism will not be used. 
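Each test in this file begins by starting a fresh bdevperf instance as the RPC target before any bdevs are created. The launch pattern, reduced to its essentials (binary path and arguments exactly as logged above; waitforlisten is the autotest_common.sh helper that polls until the UNIX-domain socket accepts connections, and the -z flag appears to defer the I/O workload until it is kicked off over RPC):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

Once the socket is up, the script issues the bdev_malloc_create / bdev_passthru_create / bdev_raid_create calls seen below against that same -r socket.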
00:20:42.885 [2024-11-18 05:01:06.148967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80734 ] 00:20:42.885 [2024-11-18 05:01:06.318883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.146 [2024-11-18 05:01:06.470925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.146 [2024-11-18 05:01:06.617081] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.716 05:01:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.716 05:01:06 -- common/autotest_common.sh@862 -- # return 0 00:20:43.716 05:01:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:43.716 05:01:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:43.716 05:01:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:43.716 BaseBdev1_malloc 00:20:43.716 05:01:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:43.975 [2024-11-18 05:01:07.305858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:43.975 [2024-11-18 05:01:07.305941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.975 [2024-11-18 05:01:07.305973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:43.975 [2024-11-18 05:01:07.305990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.975 [2024-11-18 05:01:07.308232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.975 [2024-11-18 05:01:07.308289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:43.975 BaseBdev1 00:20:43.975 05:01:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:43.975 05:01:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:43.975 05:01:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:44.234 BaseBdev2_malloc 00:20:44.234 05:01:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:44.493 [2024-11-18 05:01:07.805319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:44.493 [2024-11-18 05:01:07.805417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.493 [2024-11-18 05:01:07.805456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:44.493 [2024-11-18 05:01:07.805476] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.493 [2024-11-18 05:01:07.807649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.493 [2024-11-18 05:01:07.807691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:44.493 BaseBdev2 00:20:44.493 05:01:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:44.493 05:01:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:44.493 05:01:07 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:44.752 BaseBdev3_malloc 00:20:44.752 05:01:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:44.752 [2024-11-18 05:01:08.256765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:44.752 [2024-11-18 05:01:08.256841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.752 [2024-11-18 05:01:08.256867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:20:44.752 [2024-11-18 05:01:08.256883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.752 [2024-11-18 05:01:08.258972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.752 [2024-11-18 05:01:08.259013] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:44.752 BaseBdev3 00:20:44.752 05:01:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:44.752 05:01:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:44.752 05:01:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:45.012 BaseBdev4_malloc 00:20:45.012 05:01:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:45.270 [2024-11-18 05:01:08.684882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:45.270 [2024-11-18 05:01:08.684957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.271 [2024-11-18 05:01:08.684986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:20:45.271 [2024-11-18 05:01:08.685001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.271 [2024-11-18 05:01:08.687182] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.271 [2024-11-18 05:01:08.687249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:45.271 BaseBdev4 00:20:45.271 05:01:08 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:45.529 spare_malloc 00:20:45.529 05:01:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:45.789 spare_delay 00:20:45.789 05:01:09 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:45.789 [2024-11-18 05:01:09.253756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.789 [2024-11-18 05:01:09.253841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.789 [2024-11-18 05:01:09.253871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:45.789 [2024-11-18 05:01:09.253887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.789 [2024-11-18 05:01:09.255981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:45.789 [2024-11-18 05:01:09.256023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.789 spare 00:20:45.789 05:01:09 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:46.048 [2024-11-18 05:01:09.433836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.048 [2024-11-18 05:01:09.435766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.048 [2024-11-18 05:01:09.435846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.048 [2024-11-18 05:01:09.435908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:46.048 [2024-11-18 05:01:09.436162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:20:46.048 [2024-11-18 05:01:09.436218] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:46.048 [2024-11-18 05:01:09.436335] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:46.048 [2024-11-18 05:01:09.436714] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:20:46.048 [2024-11-18 05:01:09.436739] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:20:46.048 [2024-11-18 05:01:09.436921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.048 05:01:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.307 05:01:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.307 "name": "raid_bdev1", 00:20:46.307 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:46.307 "strip_size_kb": 0, 00:20:46.307 "state": "online", 00:20:46.307 "raid_level": "raid1", 00:20:46.307 "superblock": true, 00:20:46.307 "num_base_bdevs": 4, 00:20:46.307 "num_base_bdevs_discovered": 4, 00:20:46.307 "num_base_bdevs_operational": 4, 00:20:46.307 "base_bdevs_list": [ 00:20:46.307 { 00:20:46.307 "name": "BaseBdev1", 00:20:46.307 "uuid": "37b6a468-dfc7-531c-8507-1d823ef642b7", 00:20:46.307 "is_configured": true, 00:20:46.307 "data_offset": 2048, 00:20:46.307 "data_size": 63488 00:20:46.307 }, 00:20:46.307 { 00:20:46.307 "name": "BaseBdev2", 00:20:46.307 "uuid": "715fa263-c233-517c-9543-28c51a50ab5a", 00:20:46.307 "is_configured": true, 00:20:46.307 "data_offset": 2048, 
00:20:46.307 "data_size": 63488 00:20:46.307 }, 00:20:46.307 { 00:20:46.307 "name": "BaseBdev3", 00:20:46.307 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:46.307 "is_configured": true, 00:20:46.307 "data_offset": 2048, 00:20:46.307 "data_size": 63488 00:20:46.307 }, 00:20:46.307 { 00:20:46.307 "name": "BaseBdev4", 00:20:46.307 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:46.307 "is_configured": true, 00:20:46.307 "data_offset": 2048, 00:20:46.307 "data_size": 63488 00:20:46.307 } 00:20:46.307 ] 00:20:46.307 }' 00:20:46.307 05:01:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.307 05:01:09 -- common/autotest_common.sh@10 -- # set +x 00:20:46.566 05:01:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:46.566 05:01:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:46.824 [2024-11-18 05:01:10.166223] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.825 05:01:10 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:46.825 05:01:10 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.825 05:01:10 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:47.083 05:01:10 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:47.083 05:01:10 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:47.083 05:01:10 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:47.083 05:01:10 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@12 -- # local i 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.083 05:01:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:47.083 [2024-11-18 05:01:10.602115] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:47.341 /dev/nbd0 00:20:47.341 05:01:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.341 05:01:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.341 05:01:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:47.341 05:01:10 -- common/autotest_common.sh@867 -- # local i 00:20:47.341 05:01:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:47.342 05:01:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:47.342 05:01:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:47.342 05:01:10 -- common/autotest_common.sh@871 -- # break 00:20:47.342 05:01:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:47.342 05:01:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:47.342 05:01:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.342 1+0 records in 00:20:47.342 1+0 records out 00:20:47.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304847 s, 13.4 
MB/s 00:20:47.342 05:01:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.342 05:01:10 -- common/autotest_common.sh@884 -- # size=4096 00:20:47.342 05:01:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.342 05:01:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:47.342 05:01:10 -- common/autotest_common.sh@887 -- # return 0 00:20:47.342 05:01:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.342 05:01:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.342 05:01:10 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:47.342 05:01:10 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:47.342 05:01:10 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:53.903 63488+0 records in 00:20:53.903 63488+0 records out 00:20:53.903 32505856 bytes (33 MB, 31 MiB) copied, 6.61208 s, 4.9 MB/s 00:20:53.903 05:01:17 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:53.903 05:01:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.903 05:01:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:53.903 05:01:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.903 05:01:17 -- bdev/nbd_common.sh@51 -- # local i 00:20:53.903 05:01:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.903 05:01:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.162 [2024-11-18 05:01:17.504059] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@41 -- # break 00:20:54.162 05:01:17 -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.162 05:01:17 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:54.420 [2024-11-18 05:01:17.744995] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.420 05:01:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:54.679 05:01:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.679 "name": "raid_bdev1", 00:20:54.679 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:54.679 "strip_size_kb": 0, 00:20:54.679 "state": "online", 00:20:54.679 "raid_level": "raid1", 00:20:54.679 "superblock": true, 00:20:54.679 "num_base_bdevs": 4, 00:20:54.679 "num_base_bdevs_discovered": 3, 00:20:54.679 "num_base_bdevs_operational": 3, 00:20:54.679 "base_bdevs_list": [ 00:20:54.679 { 00:20:54.679 "name": null, 00:20:54.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.679 "is_configured": false, 00:20:54.679 "data_offset": 2048, 00:20:54.679 "data_size": 63488 00:20:54.679 }, 00:20:54.679 { 00:20:54.679 "name": "BaseBdev2", 00:20:54.679 "uuid": "715fa263-c233-517c-9543-28c51a50ab5a", 00:20:54.679 "is_configured": true, 00:20:54.679 "data_offset": 2048, 00:20:54.679 "data_size": 63488 00:20:54.679 }, 00:20:54.679 { 00:20:54.679 "name": "BaseBdev3", 00:20:54.679 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:54.679 "is_configured": true, 00:20:54.679 "data_offset": 2048, 00:20:54.679 "data_size": 63488 00:20:54.679 }, 00:20:54.679 { 00:20:54.679 "name": "BaseBdev4", 00:20:54.679 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:54.679 "is_configured": true, 00:20:54.679 "data_offset": 2048, 00:20:54.679 "data_size": 63488 00:20:54.679 } 00:20:54.679 ] 00:20:54.679 }' 00:20:54.679 05:01:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.679 05:01:17 -- common/autotest_common.sh@10 -- # set +x 00:20:54.679 05:01:18 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:54.937 [2024-11-18 05:01:18.437149] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:54.938 [2024-11-18 05:01:18.437255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.938 [2024-11-18 05:01:18.447780] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2db0 00:20:54.938 [2024-11-18 05:01:18.450014] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.196 05:01:18 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:56.130 05:01:19 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.131 05:01:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.131 05:01:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.131 05:01:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.131 05:01:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.131 05:01:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.131 05:01:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.389 05:01:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.389 "name": "raid_bdev1", 00:20:56.389 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:56.389 "strip_size_kb": 0, 00:20:56.389 "state": "online", 00:20:56.389 "raid_level": "raid1", 00:20:56.389 "superblock": true, 00:20:56.389 "num_base_bdevs": 4, 00:20:56.389 "num_base_bdevs_discovered": 4, 00:20:56.389 "num_base_bdevs_operational": 4, 00:20:56.389 "process": { 00:20:56.389 "type": "rebuild", 00:20:56.389 "target": "spare", 00:20:56.389 "progress": { 00:20:56.389 "blocks": 24576, 00:20:56.389 "percent": 38 00:20:56.389 } 
00:20:56.389 }, 00:20:56.389 "base_bdevs_list": [ 00:20:56.389 { 00:20:56.389 "name": "spare", 00:20:56.389 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:20:56.389 "is_configured": true, 00:20:56.389 "data_offset": 2048, 00:20:56.389 "data_size": 63488 00:20:56.389 }, 00:20:56.389 { 00:20:56.389 "name": "BaseBdev2", 00:20:56.389 "uuid": "715fa263-c233-517c-9543-28c51a50ab5a", 00:20:56.389 "is_configured": true, 00:20:56.389 "data_offset": 2048, 00:20:56.389 "data_size": 63488 00:20:56.389 }, 00:20:56.389 { 00:20:56.389 "name": "BaseBdev3", 00:20:56.389 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:56.389 "is_configured": true, 00:20:56.389 "data_offset": 2048, 00:20:56.389 "data_size": 63488 00:20:56.389 }, 00:20:56.389 { 00:20:56.389 "name": "BaseBdev4", 00:20:56.389 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:56.389 "is_configured": true, 00:20:56.389 "data_offset": 2048, 00:20:56.389 "data_size": 63488 00:20:56.389 } 00:20:56.389 ] 00:20:56.389 }' 00:20:56.389 05:01:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.389 05:01:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.389 05:01:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.389 05:01:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.389 05:01:19 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:56.648 [2024-11-18 05:01:19.952379] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:56.648 [2024-11-18 05:01:19.956343] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:56.648 [2024-11-18 05:01:19.956418] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.648 05:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.908 05:01:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.908 "name": "raid_bdev1", 00:20:56.908 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:56.908 "strip_size_kb": 0, 00:20:56.908 "state": "online", 00:20:56.908 "raid_level": "raid1", 00:20:56.908 "superblock": true, 00:20:56.908 "num_base_bdevs": 4, 00:20:56.908 "num_base_bdevs_discovered": 3, 00:20:56.908 "num_base_bdevs_operational": 3, 00:20:56.908 "base_bdevs_list": [ 00:20:56.908 { 00:20:56.908 "name": null, 00:20:56.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.908 "is_configured": false, 00:20:56.908 "data_offset": 2048, 00:20:56.908 "data_size": 63488 
00:20:56.908 }, 00:20:56.908 { 00:20:56.908 "name": "BaseBdev2", 00:20:56.908 "uuid": "715fa263-c233-517c-9543-28c51a50ab5a", 00:20:56.908 "is_configured": true, 00:20:56.908 "data_offset": 2048, 00:20:56.908 "data_size": 63488 00:20:56.908 }, 00:20:56.908 { 00:20:56.908 "name": "BaseBdev3", 00:20:56.908 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:56.908 "is_configured": true, 00:20:56.908 "data_offset": 2048, 00:20:56.908 "data_size": 63488 00:20:56.908 }, 00:20:56.908 { 00:20:56.908 "name": "BaseBdev4", 00:20:56.908 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:56.908 "is_configured": true, 00:20:56.908 "data_offset": 2048, 00:20:56.908 "data_size": 63488 00:20:56.908 } 00:20:56.908 ] 00:20:56.908 }' 00:20:56.908 05:01:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.908 05:01:20 -- common/autotest_common.sh@10 -- # set +x 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.167 05:01:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.426 05:01:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.426 "name": "raid_bdev1", 00:20:57.426 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:57.426 "strip_size_kb": 0, 00:20:57.426 "state": "online", 00:20:57.426 "raid_level": "raid1", 00:20:57.426 "superblock": true, 00:20:57.426 "num_base_bdevs": 4, 00:20:57.426 "num_base_bdevs_discovered": 3, 00:20:57.426 "num_base_bdevs_operational": 3, 00:20:57.426 "base_bdevs_list": [ 00:20:57.426 { 00:20:57.426 "name": null, 00:20:57.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.426 "is_configured": false, 00:20:57.426 "data_offset": 2048, 00:20:57.426 "data_size": 63488 00:20:57.426 }, 00:20:57.426 { 00:20:57.426 "name": "BaseBdev2", 00:20:57.426 "uuid": "715fa263-c233-517c-9543-28c51a50ab5a", 00:20:57.426 "is_configured": true, 00:20:57.426 "data_offset": 2048, 00:20:57.426 "data_size": 63488 00:20:57.426 }, 00:20:57.426 { 00:20:57.426 "name": "BaseBdev3", 00:20:57.426 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:57.426 "is_configured": true, 00:20:57.426 "data_offset": 2048, 00:20:57.426 "data_size": 63488 00:20:57.426 }, 00:20:57.426 { 00:20:57.426 "name": "BaseBdev4", 00:20:57.426 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:57.426 "is_configured": true, 00:20:57.426 "data_offset": 2048, 00:20:57.426 "data_size": 63488 00:20:57.426 } 00:20:57.426 ] 00:20:57.426 }' 00:20:57.426 05:01:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.426 05:01:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:57.426 05:01:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.426 05:01:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:57.426 05:01:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.426 [2024-11-18 05:01:20.926883] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:57.426 [2024-11-18 05:01:20.926922] 
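
The calls above re-attach the spare with bdev_raid_add_base_bdev, which kicks off the rebuild whose progress JSON follows. A hedged sketch of driving and watching that rebuild, using only the RPCs and jq filters that appear in this log; the polling loop itself is illustrative:

# Sketch: re-add the spare, then poll until the rebuild process finishes
# (process.type drops back to "none"), mirroring verify_raid_bdev_process above.
sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
while true; do
    ptype=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ "$ptype" == none ]] && break
    sleep 1
done
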
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.426 [2024-11-18 05:01:20.937044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2e80 00:20:57.426 [2024-11-18 05:01:20.939124] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.685 05:01:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.620 05:01:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:58.878 "name": "raid_bdev1", 00:20:58.878 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:58.878 "strip_size_kb": 0, 00:20:58.878 "state": "online", 00:20:58.878 "raid_level": "raid1", 00:20:58.878 "superblock": true, 00:20:58.878 "num_base_bdevs": 4, 00:20:58.878 "num_base_bdevs_discovered": 4, 00:20:58.878 "num_base_bdevs_operational": 4, 00:20:58.878 "process": { 00:20:58.878 "type": "rebuild", 00:20:58.878 "target": "spare", 00:20:58.878 "progress": { 00:20:58.878 "blocks": 24576, 00:20:58.878 "percent": 38 00:20:58.878 } 00:20:58.878 }, 00:20:58.878 "base_bdevs_list": [ 00:20:58.878 { 00:20:58.878 "name": "spare", 00:20:58.878 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:20:58.878 "is_configured": true, 00:20:58.878 "data_offset": 2048, 00:20:58.878 "data_size": 63488 00:20:58.878 }, 00:20:58.878 { 00:20:58.878 "name": "BaseBdev2", 00:20:58.878 "uuid": "715fa263-c233-517c-9543-28c51a50ab5a", 00:20:58.878 "is_configured": true, 00:20:58.878 "data_offset": 2048, 00:20:58.878 "data_size": 63488 00:20:58.878 }, 00:20:58.878 { 00:20:58.878 "name": "BaseBdev3", 00:20:58.878 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:58.878 "is_configured": true, 00:20:58.878 "data_offset": 2048, 00:20:58.878 "data_size": 63488 00:20:58.878 }, 00:20:58.878 { 00:20:58.878 "name": "BaseBdev4", 00:20:58.878 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:58.878 "is_configured": true, 00:20:58.878 "data_offset": 2048, 00:20:58.878 "data_size": 63488 00:20:58.878 } 00:20:58.878 ] 00:20:58.878 }' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:58.878 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:58.878 05:01:22 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:59.136 [2024-11-18 05:01:22.433440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:59.136 [2024-11-18 05:01:22.445313] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000ca2e80 00:20:59.136 05:01:22 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:59.136 05:01:22 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.137 05:01:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.395 05:01:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.396 "name": "raid_bdev1", 00:20:59.396 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:59.396 "strip_size_kb": 0, 00:20:59.396 "state": "online", 00:20:59.396 "raid_level": "raid1", 00:20:59.396 "superblock": true, 00:20:59.396 "num_base_bdevs": 4, 00:20:59.396 "num_base_bdevs_discovered": 3, 00:20:59.396 "num_base_bdevs_operational": 3, 00:20:59.396 "process": { 00:20:59.396 "type": "rebuild", 00:20:59.396 "target": "spare", 00:20:59.396 "progress": { 00:20:59.396 "blocks": 36864, 00:20:59.396 "percent": 58 00:20:59.396 } 00:20:59.396 }, 00:20:59.396 "base_bdevs_list": [ 00:20:59.396 { 00:20:59.396 "name": "spare", 00:20:59.396 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:20:59.396 "is_configured": true, 00:20:59.396 "data_offset": 2048, 00:20:59.396 "data_size": 63488 00:20:59.396 }, 00:20:59.396 { 00:20:59.396 "name": null, 00:20:59.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.396 "is_configured": false, 00:20:59.396 "data_offset": 2048, 00:20:59.396 "data_size": 63488 00:20:59.396 }, 00:20:59.396 { 00:20:59.396 "name": "BaseBdev3", 00:20:59.396 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:59.396 "is_configured": true, 00:20:59.396 "data_offset": 2048, 00:20:59.396 "data_size": 63488 00:20:59.396 }, 00:20:59.396 { 00:20:59.396 "name": "BaseBdev4", 00:20:59.396 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:59.396 "is_configured": true, 00:20:59.396 "data_offset": 2048, 00:20:59.396 "data_size": 63488 00:20:59.396 } 00:20:59.396 ] 00:20:59.396 }' 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@657 -- # local timeout=456 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.396 05:01:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.666 05:01:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.666 "name": "raid_bdev1", 00:20:59.666 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:20:59.666 "strip_size_kb": 0, 00:20:59.666 "state": "online", 00:20:59.666 "raid_level": "raid1", 00:20:59.666 "superblock": true, 00:20:59.666 "num_base_bdevs": 4, 00:20:59.666 "num_base_bdevs_discovered": 3, 00:20:59.666 "num_base_bdevs_operational": 3, 00:20:59.666 "process": { 00:20:59.666 "type": "rebuild", 00:20:59.666 "target": "spare", 00:20:59.666 "progress": { 00:20:59.666 "blocks": 43008, 00:20:59.666 "percent": 67 00:20:59.666 } 00:20:59.666 }, 00:20:59.666 "base_bdevs_list": [ 00:20:59.666 { 00:20:59.666 "name": "spare", 00:20:59.666 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:20:59.666 "is_configured": true, 00:20:59.666 "data_offset": 2048, 00:20:59.666 "data_size": 63488 00:20:59.666 }, 00:20:59.666 { 00:20:59.666 "name": null, 00:20:59.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.666 "is_configured": false, 00:20:59.666 "data_offset": 2048, 00:20:59.666 "data_size": 63488 00:20:59.666 }, 00:20:59.666 { 00:20:59.666 "name": "BaseBdev3", 00:20:59.666 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:20:59.666 "is_configured": true, 00:20:59.666 "data_offset": 2048, 00:20:59.666 "data_size": 63488 00:20:59.666 }, 00:20:59.666 { 00:20:59.666 "name": "BaseBdev4", 00:20:59.666 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:20:59.666 "is_configured": true, 00:20:59.666 "data_offset": 2048, 00:20:59.666 "data_size": 63488 00:20:59.666 } 00:20:59.666 ] 00:20:59.666 }' 00:20:59.666 05:01:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.666 05:01:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.666 05:01:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.666 05:01:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.666 05:01:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:00.597 [2024-11-18 05:01:24.052203] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:00.597 [2024-11-18 05:01:24.052270] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:00.597 [2024-11-18 05:01:24.052399] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.597 05:01:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.855 05:01:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:00.855 "name": "raid_bdev1", 00:21:00.855 "uuid": 
"2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:21:00.855 "strip_size_kb": 0, 00:21:00.855 "state": "online", 00:21:00.855 "raid_level": "raid1", 00:21:00.855 "superblock": true, 00:21:00.856 "num_base_bdevs": 4, 00:21:00.856 "num_base_bdevs_discovered": 3, 00:21:00.856 "num_base_bdevs_operational": 3, 00:21:00.856 "base_bdevs_list": [ 00:21:00.856 { 00:21:00.856 "name": "spare", 00:21:00.856 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:21:00.856 "is_configured": true, 00:21:00.856 "data_offset": 2048, 00:21:00.856 "data_size": 63488 00:21:00.856 }, 00:21:00.856 { 00:21:00.856 "name": null, 00:21:00.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.856 "is_configured": false, 00:21:00.856 "data_offset": 2048, 00:21:00.856 "data_size": 63488 00:21:00.856 }, 00:21:00.856 { 00:21:00.856 "name": "BaseBdev3", 00:21:00.856 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:21:00.856 "is_configured": true, 00:21:00.856 "data_offset": 2048, 00:21:00.856 "data_size": 63488 00:21:00.856 }, 00:21:00.856 { 00:21:00.856 "name": "BaseBdev4", 00:21:00.856 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:21:00.856 "is_configured": true, 00:21:00.856 "data_offset": 2048, 00:21:00.856 "data_size": 63488 00:21:00.856 } 00:21:00.856 ] 00:21:00.856 }' 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@660 -- # break 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.856 05:01:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.115 "name": "raid_bdev1", 00:21:01.115 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:21:01.115 "strip_size_kb": 0, 00:21:01.115 "state": "online", 00:21:01.115 "raid_level": "raid1", 00:21:01.115 "superblock": true, 00:21:01.115 "num_base_bdevs": 4, 00:21:01.115 "num_base_bdevs_discovered": 3, 00:21:01.115 "num_base_bdevs_operational": 3, 00:21:01.115 "base_bdevs_list": [ 00:21:01.115 { 00:21:01.115 "name": "spare", 00:21:01.115 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:21:01.115 "is_configured": true, 00:21:01.115 "data_offset": 2048, 00:21:01.115 "data_size": 63488 00:21:01.115 }, 00:21:01.115 { 00:21:01.115 "name": null, 00:21:01.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.115 "is_configured": false, 00:21:01.115 "data_offset": 2048, 00:21:01.115 "data_size": 63488 00:21:01.115 }, 00:21:01.115 { 00:21:01.115 "name": "BaseBdev3", 00:21:01.115 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:21:01.115 "is_configured": true, 00:21:01.115 "data_offset": 2048, 00:21:01.115 "data_size": 63488 00:21:01.115 }, 00:21:01.115 { 00:21:01.115 "name": "BaseBdev4", 00:21:01.115 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:21:01.115 
"is_configured": true, 00:21:01.115 "data_offset": 2048, 00:21:01.115 "data_size": 63488 00:21:01.115 } 00:21:01.115 ] 00:21:01.115 }' 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.115 05:01:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.376 05:01:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.376 "name": "raid_bdev1", 00:21:01.376 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:21:01.376 "strip_size_kb": 0, 00:21:01.377 "state": "online", 00:21:01.377 "raid_level": "raid1", 00:21:01.377 "superblock": true, 00:21:01.377 "num_base_bdevs": 4, 00:21:01.377 "num_base_bdevs_discovered": 3, 00:21:01.377 "num_base_bdevs_operational": 3, 00:21:01.377 "base_bdevs_list": [ 00:21:01.377 { 00:21:01.377 "name": "spare", 00:21:01.377 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:21:01.377 "is_configured": true, 00:21:01.377 "data_offset": 2048, 00:21:01.377 "data_size": 63488 00:21:01.377 }, 00:21:01.377 { 00:21:01.377 "name": null, 00:21:01.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.377 "is_configured": false, 00:21:01.377 "data_offset": 2048, 00:21:01.377 "data_size": 63488 00:21:01.377 }, 00:21:01.377 { 00:21:01.377 "name": "BaseBdev3", 00:21:01.377 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:21:01.377 "is_configured": true, 00:21:01.377 "data_offset": 2048, 00:21:01.377 "data_size": 63488 00:21:01.377 }, 00:21:01.377 { 00:21:01.377 "name": "BaseBdev4", 00:21:01.377 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:21:01.377 "is_configured": true, 00:21:01.377 "data_offset": 2048, 00:21:01.377 "data_size": 63488 00:21:01.377 } 00:21:01.377 ] 00:21:01.377 }' 00:21:01.377 05:01:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.377 05:01:24 -- common/autotest_common.sh@10 -- # set +x 00:21:01.636 05:01:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:01.895 [2024-11-18 05:01:25.354740] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.895 [2024-11-18 05:01:25.354772] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.895 [2024-11-18 05:01:25.354861] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.895 [2024-11-18 
05:01:25.354953] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.895 [2024-11-18 05:01:25.354968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:21:01.895 05:01:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.895 05:01:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:02.154 05:01:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:02.154 05:01:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:02.154 05:01:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:02.154 05:01:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:02.413 /dev/nbd0 00:21:02.413 05:01:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:02.413 05:01:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:02.413 05:01:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:02.413 05:01:25 -- common/autotest_common.sh@867 -- # local i 00:21:02.413 05:01:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:02.413 05:01:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:02.413 05:01:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:02.414 05:01:25 -- common/autotest_common.sh@871 -- # break 00:21:02.414 05:01:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:02.414 05:01:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:02.414 05:01:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.414 1+0 records in 00:21:02.414 1+0 records out 00:21:02.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224528 s, 18.2 MB/s 00:21:02.414 05:01:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.414 05:01:25 -- common/autotest_common.sh@884 -- # size=4096 00:21:02.414 05:01:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.414 05:01:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:02.414 05:01:25 -- common/autotest_common.sh@887 -- # return 0 00:21:02.414 05:01:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.414 05:01:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:02.414 05:01:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:02.672 /dev/nbd1 00:21:02.672 05:01:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:02.672 05:01:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:02.672 05:01:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:02.672 05:01:26 -- 
common/autotest_common.sh@867 -- # local i 00:21:02.672 05:01:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:02.672 05:01:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:02.672 05:01:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:02.672 05:01:26 -- common/autotest_common.sh@871 -- # break 00:21:02.672 05:01:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:02.672 05:01:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:02.672 05:01:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.672 1+0 records in 00:21:02.672 1+0 records out 00:21:02.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321861 s, 12.7 MB/s 00:21:02.672 05:01:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.672 05:01:26 -- common/autotest_common.sh@884 -- # size=4096 00:21:02.672 05:01:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.672 05:01:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:02.672 05:01:26 -- common/autotest_common.sh@887 -- # return 0 00:21:02.672 05:01:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.672 05:01:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:02.672 05:01:26 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:02.932 05:01:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:02.932 05:01:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:02.932 05:01:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:02.932 05:01:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:02.932 05:01:26 -- bdev/nbd_common.sh@51 -- # local i 00:21:02.932 05:01:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.932 05:01:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@41 -- # break 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.190 05:01:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@41 -- # break 00:21:03.449 05:01:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.449 05:01:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:03.449 05:01:26 -- bdev/bdev_raid.sh@694 -- # for bdev in 
"${base_bdevs[@]}" 00:21:03.449 05:01:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:03.449 05:01:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:03.449 05:01:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:03.708 [2024-11-18 05:01:27.144889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:03.708 [2024-11-18 05:01:27.144949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.708 [2024-11-18 05:01:27.144981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:21:03.708 [2024-11-18 05:01:27.144994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.708 [2024-11-18 05:01:27.147336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.708 [2024-11-18 05:01:27.147375] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:03.708 [2024-11-18 05:01:27.147472] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:03.708 [2024-11-18 05:01:27.147532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.708 BaseBdev1 00:21:03.708 05:01:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:03.708 05:01:27 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:03.708 05:01:27 -- bdev/bdev_raid.sh@696 -- # continue 00:21:03.708 05:01:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:03.708 05:01:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:03.708 05:01:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:03.967 05:01:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:04.226 [2024-11-18 05:01:27.576967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:04.226 [2024-11-18 05:01:27.577166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.226 [2024-11-18 05:01:27.577278] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:21:04.226 [2024-11-18 05:01:27.577503] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.226 [2024-11-18 05:01:27.577951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.226 [2024-11-18 05:01:27.577983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:04.226 [2024-11-18 05:01:27.578084] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:04.226 [2024-11-18 05:01:27.578127] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:04.226 [2024-11-18 05:01:27.578142] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.226 [2024-11-18 05:01:27.578166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:21:04.226 [2024-11-18 05:01:27.578259] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:04.226 BaseBdev3 00:21:04.226 05:01:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:04.226 05:01:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:04.226 05:01:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:04.486 05:01:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:04.486 [2024-11-18 05:01:27.997038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:04.486 [2024-11-18 05:01:27.997244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.486 [2024-11-18 05:01:27.997310] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:21:04.486 [2024-11-18 05:01:27.997428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.486 [2024-11-18 05:01:27.997875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.486 [2024-11-18 05:01:27.998052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:04.486 [2024-11-18 05:01:27.998354] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:04.486 [2024-11-18 05:01:27.998559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:04.486 BaseBdev4 00:21:04.745 05:01:28 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:04.745 05:01:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:05.004 [2024-11-18 05:01:28.401138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.004 [2024-11-18 05:01:28.401377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.004 [2024-11-18 05:01:28.401446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:21:05.004 [2024-11-18 05:01:28.401564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.004 [2024-11-18 05:01:28.402083] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.004 [2024-11-18 05:01:28.402155] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.004 [2024-11-18 05:01:28.402273] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:05.004 [2024-11-18 05:01:28.402308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.004 spare 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.004 05:01:28 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.004 05:01:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.004 [2024-11-18 05:01:28.502454] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:21:05.004 [2024-11-18 05:01:28.502500] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:05.004 [2024-11-18 05:01:28.502626] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1530 00:21:05.004 [2024-11-18 05:01:28.502977] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:21:05.004 [2024-11-18 05:01:28.502992] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:21:05.004 [2024-11-18 05:01:28.503126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.263 05:01:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.263 "name": "raid_bdev1", 00:21:05.263 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 00:21:05.263 "strip_size_kb": 0, 00:21:05.263 "state": "online", 00:21:05.263 "raid_level": "raid1", 00:21:05.263 "superblock": true, 00:21:05.263 "num_base_bdevs": 4, 00:21:05.263 "num_base_bdevs_discovered": 3, 00:21:05.263 "num_base_bdevs_operational": 3, 00:21:05.263 "base_bdevs_list": [ 00:21:05.263 { 00:21:05.263 "name": "spare", 00:21:05.263 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:21:05.263 "is_configured": true, 00:21:05.263 "data_offset": 2048, 00:21:05.263 "data_size": 63488 00:21:05.263 }, 00:21:05.263 { 00:21:05.263 "name": null, 00:21:05.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.263 "is_configured": false, 00:21:05.263 "data_offset": 2048, 00:21:05.263 "data_size": 63488 00:21:05.263 }, 00:21:05.263 { 00:21:05.263 "name": "BaseBdev3", 00:21:05.263 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:21:05.263 "is_configured": true, 00:21:05.263 "data_offset": 2048, 00:21:05.263 "data_size": 63488 00:21:05.263 }, 00:21:05.263 { 00:21:05.263 "name": "BaseBdev4", 00:21:05.263 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:21:05.263 "is_configured": true, 00:21:05.263 "data_offset": 2048, 00:21:05.263 "data_size": 63488 00:21:05.263 } 00:21:05.263 ] 00:21:05.263 }' 00:21:05.263 05:01:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.263 05:01:28 -- common/autotest_common.sh@10 -- # set +x 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.522 05:01:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.781 "name": "raid_bdev1", 00:21:05.781 "uuid": "2b121aed-1e9f-4fd3-89f2-b75c80f5498c", 
00:21:05.781 "strip_size_kb": 0, 00:21:05.781 "state": "online", 00:21:05.781 "raid_level": "raid1", 00:21:05.781 "superblock": true, 00:21:05.781 "num_base_bdevs": 4, 00:21:05.781 "num_base_bdevs_discovered": 3, 00:21:05.781 "num_base_bdevs_operational": 3, 00:21:05.781 "base_bdevs_list": [ 00:21:05.781 { 00:21:05.781 "name": "spare", 00:21:05.781 "uuid": "dddd4e9e-7dcd-5dea-b6a5-52a8dd224842", 00:21:05.781 "is_configured": true, 00:21:05.781 "data_offset": 2048, 00:21:05.781 "data_size": 63488 00:21:05.781 }, 00:21:05.781 { 00:21:05.781 "name": null, 00:21:05.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.781 "is_configured": false, 00:21:05.781 "data_offset": 2048, 00:21:05.781 "data_size": 63488 00:21:05.781 }, 00:21:05.781 { 00:21:05.781 "name": "BaseBdev3", 00:21:05.781 "uuid": "3c424849-7655-510b-8925-0415c077004b", 00:21:05.781 "is_configured": true, 00:21:05.781 "data_offset": 2048, 00:21:05.781 "data_size": 63488 00:21:05.781 }, 00:21:05.781 { 00:21:05.781 "name": "BaseBdev4", 00:21:05.781 "uuid": "a46bf86d-964c-5f61-a298-32d93cf88f99", 00:21:05.781 "is_configured": true, 00:21:05.781 "data_offset": 2048, 00:21:05.781 "data_size": 63488 00:21:05.781 } 00:21:05.781 ] 00:21:05.781 }' 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.781 05:01:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:06.039 05:01:29 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.039 05:01:29 -- bdev/bdev_raid.sh@709 -- # killprocess 80734 00:21:06.039 05:01:29 -- common/autotest_common.sh@936 -- # '[' -z 80734 ']' 00:21:06.039 05:01:29 -- common/autotest_common.sh@940 -- # kill -0 80734 00:21:06.039 05:01:29 -- common/autotest_common.sh@941 -- # uname 00:21:06.039 05:01:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:06.039 05:01:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80734 00:21:06.039 05:01:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:06.039 05:01:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:06.039 05:01:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80734' 00:21:06.039 killing process with pid 80734 00:21:06.039 Received shutdown signal, test time was about 60.000000 seconds 00:21:06.039 00:21:06.039 Latency(us) 00:21:06.039 [2024-11-18T05:01:29.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.039 [2024-11-18T05:01:29.563Z] =================================================================================================================== 00:21:06.039 [2024-11-18T05:01:29.563Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.039 05:01:29 -- common/autotest_common.sh@955 -- # kill 80734 00:21:06.039 [2024-11-18 05:01:29.405287] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:06.039 05:01:29 -- common/autotest_common.sh@960 -- # wait 80734 00:21:06.039 [2024-11-18 05:01:29.405424] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.039 [2024-11-18 05:01:29.405507] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.039 [2024-11-18 05:01:29.405524] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:21:06.298 [2024-11-18 05:01:29.721452] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:07.235 00:21:07.235 real 0m24.557s 00:21:07.235 user 0m33.325s 00:21:07.235 sys 0m4.529s 00:21:07.235 05:01:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:07.235 ************************************ 00:21:07.235 END TEST raid_rebuild_test_sb 00:21:07.235 ************************************ 00:21:07.235 05:01:30 -- common/autotest_common.sh@10 -- # set +x 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:07.235 05:01:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:07.235 05:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:07.235 05:01:30 -- common/autotest_common.sh@10 -- # set +x 00:21:07.235 ************************************ 00:21:07.235 START TEST raid_rebuild_test_io 00:21:07.235 ************************************ 00:21:07.235 05:01:30 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=81325 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81325 
/var/tmp/spdk-raid.sock 00:21:07.235 05:01:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:07.235 05:01:30 -- common/autotest_common.sh@829 -- # '[' -z 81325 ']' 00:21:07.235 05:01:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:07.235 05:01:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:07.235 05:01:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:07.235 05:01:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.235 05:01:30 -- common/autotest_common.sh@10 -- # set +x 00:21:07.495 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:07.495 Zero copy mechanism will not be used. 00:21:07.495 [2024-11-18 05:01:30.768289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:07.495 [2024-11-18 05:01:30.768464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81325 ] 00:21:07.495 [2024-11-18 05:01:30.935084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.754 [2024-11-18 05:01:31.083692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.754 [2024-11-18 05:01:31.223263] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.323 05:01:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.323 05:01:31 -- common/autotest_common.sh@862 -- # return 0 00:21:08.323 05:01:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:08.323 05:01:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:08.323 05:01:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:08.323 BaseBdev1 00:21:08.323 05:01:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:08.323 05:01:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:08.323 05:01:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:08.582 BaseBdev2 00:21:08.840 05:01:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:08.840 05:01:32 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:08.840 05:01:32 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:08.840 BaseBdev3 00:21:08.840 05:01:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:08.840 05:01:32 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:08.840 05:01:32 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:09.098 BaseBdev4 00:21:09.098 05:01:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:09.357 spare_malloc 00:21:09.357 05:01:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:09.615 spare_delay 00:21:09.615 05:01:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:09.615 [2024-11-18 05:01:33.115984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:09.615 [2024-11-18 05:01:33.116061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.615 [2024-11-18 05:01:33.116087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:21:09.615 [2024-11-18 05:01:33.116102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.615 [2024-11-18 05:01:33.118368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.615 [2024-11-18 05:01:33.118407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:09.615 spare 00:21:09.615 05:01:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:09.873 [2024-11-18 05:01:33.348049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.873 [2024-11-18 05:01:33.349871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.873 [2024-11-18 05:01:33.349925] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.873 [2024-11-18 05:01:33.349972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.873 [2024-11-18 05:01:33.350039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:21:09.873 [2024-11-18 05:01:33.350055] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:09.873 [2024-11-18 05:01:33.350296] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:09.873 [2024-11-18 05:01:33.350678] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:21:09.873 [2024-11-18 05:01:33.350703] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:21:09.873 [2024-11-18 05:01:33.350898] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.873 05:01:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:10.132 05:01:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.132 "name": "raid_bdev1", 00:21:10.132 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:10.132 "strip_size_kb": 0, 00:21:10.132 "state": "online", 00:21:10.132 "raid_level": "raid1", 00:21:10.132 "superblock": false, 00:21:10.132 "num_base_bdevs": 4, 00:21:10.132 "num_base_bdevs_discovered": 4, 00:21:10.132 "num_base_bdevs_operational": 4, 00:21:10.132 "base_bdevs_list": [ 00:21:10.132 { 00:21:10.132 "name": "BaseBdev1", 00:21:10.132 "uuid": "33e6d58b-6c8d-4435-98c3-206f2ee0defc", 00:21:10.132 "is_configured": true, 00:21:10.132 "data_offset": 0, 00:21:10.132 "data_size": 65536 00:21:10.132 }, 00:21:10.132 { 00:21:10.132 "name": "BaseBdev2", 00:21:10.132 "uuid": "8b80f5aa-4734-4c3c-b15a-12824aa093d9", 00:21:10.132 "is_configured": true, 00:21:10.132 "data_offset": 0, 00:21:10.132 "data_size": 65536 00:21:10.132 }, 00:21:10.132 { 00:21:10.132 "name": "BaseBdev3", 00:21:10.132 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:10.132 "is_configured": true, 00:21:10.132 "data_offset": 0, 00:21:10.132 "data_size": 65536 00:21:10.132 }, 00:21:10.132 { 00:21:10.132 "name": "BaseBdev4", 00:21:10.132 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:10.132 "is_configured": true, 00:21:10.132 "data_offset": 0, 00:21:10.132 "data_size": 65536 00:21:10.132 } 00:21:10.132 ] 00:21:10.132 }' 00:21:10.132 05:01:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.132 05:01:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.392 05:01:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.392 05:01:33 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:10.651 [2024-11-18 05:01:34.080426] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.651 05:01:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:10.651 05:01:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.651 05:01:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:10.910 05:01:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:10.910 05:01:34 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:10.910 05:01:34 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:10.910 05:01:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:10.910 [2024-11-18 05:01:34.430063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:11.168 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:11.168 Zero copy mechanism will not be used. 00:21:11.168 Running I/O for 60 seconds... 
00:21:11.168 [2024-11-18 05:01:34.591705] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.168 [2024-11-18 05:01:34.598393] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:11.168 05:01:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.169 05:01:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.169 05:01:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.169 05:01:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.169 05:01:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.169 05:01:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.428 05:01:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.428 "name": "raid_bdev1", 00:21:11.428 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:11.428 "strip_size_kb": 0, 00:21:11.428 "state": "online", 00:21:11.428 "raid_level": "raid1", 00:21:11.428 "superblock": false, 00:21:11.428 "num_base_bdevs": 4, 00:21:11.428 "num_base_bdevs_discovered": 3, 00:21:11.428 "num_base_bdevs_operational": 3, 00:21:11.428 "base_bdevs_list": [ 00:21:11.428 { 00:21:11.428 "name": null, 00:21:11.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.428 "is_configured": false, 00:21:11.428 "data_offset": 0, 00:21:11.428 "data_size": 65536 00:21:11.428 }, 00:21:11.428 { 00:21:11.428 "name": "BaseBdev2", 00:21:11.428 "uuid": "8b80f5aa-4734-4c3c-b15a-12824aa093d9", 00:21:11.428 "is_configured": true, 00:21:11.428 "data_offset": 0, 00:21:11.428 "data_size": 65536 00:21:11.428 }, 00:21:11.428 { 00:21:11.428 "name": "BaseBdev3", 00:21:11.428 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:11.428 "is_configured": true, 00:21:11.428 "data_offset": 0, 00:21:11.428 "data_size": 65536 00:21:11.428 }, 00:21:11.428 { 00:21:11.428 "name": "BaseBdev4", 00:21:11.428 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:11.428 "is_configured": true, 00:21:11.428 "data_offset": 0, 00:21:11.428 "data_size": 65536 00:21:11.428 } 00:21:11.428 ] 00:21:11.428 }' 00:21:11.428 05:01:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.428 05:01:34 -- common/autotest_common.sh@10 -- # set +x 00:21:11.687 05:01:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:11.946 [2024-11-18 05:01:35.363212] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:11.946 [2024-11-18 05:01:35.363329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:11.946 05:01:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:11.946 [2024-11-18 05:01:35.412551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:11.946 [2024-11-18 05:01:35.414455] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.205 [2024-11-18 
05:01:35.530883] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:12.205 [2024-11-18 05:01:35.531323] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:12.205 [2024-11-18 05:01:35.646313] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:12.205 [2024-11-18 05:01:35.646518] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:12.773 [2024-11-18 05:01:36.080551] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:13.032 [2024-11-18 05:01:36.320430] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.032 05:01:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.032 [2024-11-18 05:01:36.442148] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:13.032 [2024-11-18 05:01:36.442746] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:13.291 05:01:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.291 "name": "raid_bdev1", 00:21:13.291 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:13.291 "strip_size_kb": 0, 00:21:13.291 "state": "online", 00:21:13.291 "raid_level": "raid1", 00:21:13.291 "superblock": false, 00:21:13.291 "num_base_bdevs": 4, 00:21:13.291 "num_base_bdevs_discovered": 4, 00:21:13.291 "num_base_bdevs_operational": 4, 00:21:13.291 "process": { 00:21:13.291 "type": "rebuild", 00:21:13.291 "target": "spare", 00:21:13.291 "progress": { 00:21:13.291 "blocks": 16384, 00:21:13.291 "percent": 25 00:21:13.291 } 00:21:13.291 }, 00:21:13.291 "base_bdevs_list": [ 00:21:13.291 { 00:21:13.291 "name": "spare", 00:21:13.291 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:13.291 "is_configured": true, 00:21:13.291 "data_offset": 0, 00:21:13.291 "data_size": 65536 00:21:13.291 }, 00:21:13.291 { 00:21:13.291 "name": "BaseBdev2", 00:21:13.291 "uuid": "8b80f5aa-4734-4c3c-b15a-12824aa093d9", 00:21:13.291 "is_configured": true, 00:21:13.291 "data_offset": 0, 00:21:13.291 "data_size": 65536 00:21:13.291 }, 00:21:13.291 { 00:21:13.291 "name": "BaseBdev3", 00:21:13.291 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:13.291 "is_configured": true, 00:21:13.291 "data_offset": 0, 00:21:13.291 "data_size": 65536 00:21:13.291 }, 00:21:13.291 { 00:21:13.291 "name": "BaseBdev4", 00:21:13.291 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:13.291 "is_configured": true, 00:21:13.291 "data_offset": 0, 00:21:13.291 "data_size": 65536 00:21:13.291 } 00:21:13.291 ] 00:21:13.291 }' 00:21:13.291 05:01:36 -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.type // "none"' 00:21:13.291 05:01:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.291 05:01:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.291 05:01:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.291 05:01:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:13.291 [2024-11-18 05:01:36.774188] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:13.550 [2024-11-18 05:01:36.826808] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:13.550 [2024-11-18 05:01:36.890926] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:13.550 [2024-11-18 05:01:36.891138] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:13.550 [2024-11-18 05:01:36.898126] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:13.550 [2024-11-18 05:01:36.907432] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.550 [2024-11-18 05:01:36.930713] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.550 05:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.809 05:01:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:13.809 "name": "raid_bdev1", 00:21:13.809 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:13.809 "strip_size_kb": 0, 00:21:13.809 "state": "online", 00:21:13.809 "raid_level": "raid1", 00:21:13.809 "superblock": false, 00:21:13.809 "num_base_bdevs": 4, 00:21:13.809 "num_base_bdevs_discovered": 3, 00:21:13.809 "num_base_bdevs_operational": 3, 00:21:13.809 "base_bdevs_list": [ 00:21:13.809 { 00:21:13.809 "name": null, 00:21:13.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.809 "is_configured": false, 00:21:13.809 "data_offset": 0, 00:21:13.809 "data_size": 65536 00:21:13.809 }, 00:21:13.809 { 00:21:13.809 "name": "BaseBdev2", 00:21:13.809 "uuid": "8b80f5aa-4734-4c3c-b15a-12824aa093d9", 00:21:13.809 "is_configured": true, 00:21:13.809 "data_offset": 0, 00:21:13.809 "data_size": 65536 00:21:13.809 }, 00:21:13.809 { 00:21:13.809 "name": "BaseBdev3", 00:21:13.809 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:13.809 "is_configured": true, 00:21:13.809 "data_offset": 0, 
00:21:13.809 "data_size": 65536 00:21:13.809 }, 00:21:13.809 { 00:21:13.809 "name": "BaseBdev4", 00:21:13.809 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:13.809 "is_configured": true, 00:21:13.809 "data_offset": 0, 00:21:13.809 "data_size": 65536 00:21:13.809 } 00:21:13.809 ] 00:21:13.809 }' 00:21:13.809 05:01:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:13.809 05:01:37 -- common/autotest_common.sh@10 -- # set +x 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.068 05:01:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.326 05:01:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.327 "name": "raid_bdev1", 00:21:14.327 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:14.327 "strip_size_kb": 0, 00:21:14.327 "state": "online", 00:21:14.327 "raid_level": "raid1", 00:21:14.327 "superblock": false, 00:21:14.327 "num_base_bdevs": 4, 00:21:14.327 "num_base_bdevs_discovered": 3, 00:21:14.327 "num_base_bdevs_operational": 3, 00:21:14.327 "base_bdevs_list": [ 00:21:14.327 { 00:21:14.327 "name": null, 00:21:14.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.327 "is_configured": false, 00:21:14.327 "data_offset": 0, 00:21:14.327 "data_size": 65536 00:21:14.327 }, 00:21:14.327 { 00:21:14.327 "name": "BaseBdev2", 00:21:14.327 "uuid": "8b80f5aa-4734-4c3c-b15a-12824aa093d9", 00:21:14.327 "is_configured": true, 00:21:14.327 "data_offset": 0, 00:21:14.327 "data_size": 65536 00:21:14.327 }, 00:21:14.327 { 00:21:14.327 "name": "BaseBdev3", 00:21:14.327 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:14.327 "is_configured": true, 00:21:14.327 "data_offset": 0, 00:21:14.327 "data_size": 65536 00:21:14.327 }, 00:21:14.327 { 00:21:14.327 "name": "BaseBdev4", 00:21:14.327 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:14.327 "is_configured": true, 00:21:14.327 "data_offset": 0, 00:21:14.327 "data_size": 65536 00:21:14.327 } 00:21:14.327 ] 00:21:14.327 }' 00:21:14.327 05:01:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.327 05:01:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:14.327 05:01:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.327 05:01:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:14.327 05:01:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:14.586 [2024-11-18 05:01:38.025305] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:14.586 [2024-11-18 05:01:38.025364] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.586 05:01:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:14.586 [2024-11-18 05:01:38.086113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:21:14.586 [2024-11-18 05:01:38.088050] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:14.845 [2024-11-18 
05:01:38.212815] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:14.845 [2024-11-18 05:01:38.315787] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:14.845 [2024-11-18 05:01:38.316038] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:15.783 [2024-11-18 05:01:39.026311] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:15.783 [2024-11-18 05:01:39.026843] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.783 05:01:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.783 [2024-11-18 05:01:39.235789] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:15.783 [2024-11-18 05:01:39.236022] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:16.044 05:01:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.044 "name": "raid_bdev1", 00:21:16.044 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:16.044 "strip_size_kb": 0, 00:21:16.044 "state": "online", 00:21:16.044 "raid_level": "raid1", 00:21:16.044 "superblock": false, 00:21:16.044 "num_base_bdevs": 4, 00:21:16.044 "num_base_bdevs_discovered": 4, 00:21:16.044 "num_base_bdevs_operational": 4, 00:21:16.044 "process": { 00:21:16.044 "type": "rebuild", 00:21:16.044 "target": "spare", 00:21:16.044 "progress": { 00:21:16.044 "blocks": 16384, 00:21:16.044 "percent": 25 00:21:16.044 } 00:21:16.044 }, 00:21:16.044 "base_bdevs_list": [ 00:21:16.044 { 00:21:16.044 "name": "spare", 00:21:16.044 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:16.044 "is_configured": true, 00:21:16.044 "data_offset": 0, 00:21:16.044 "data_size": 65536 00:21:16.044 }, 00:21:16.044 { 00:21:16.044 "name": "BaseBdev2", 00:21:16.044 "uuid": "8b80f5aa-4734-4c3c-b15a-12824aa093d9", 00:21:16.044 "is_configured": true, 00:21:16.044 "data_offset": 0, 00:21:16.044 "data_size": 65536 00:21:16.044 }, 00:21:16.044 { 00:21:16.044 "name": "BaseBdev3", 00:21:16.044 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:16.044 "is_configured": true, 00:21:16.044 "data_offset": 0, 00:21:16.044 "data_size": 65536 00:21:16.044 }, 00:21:16.044 { 00:21:16.044 "name": "BaseBdev4", 00:21:16.044 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:16.044 "is_configured": true, 00:21:16.044 "data_offset": 0, 00:21:16.044 "data_size": 65536 00:21:16.044 } 00:21:16.044 ] 00:21:16.044 }' 00:21:16.044 05:01:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.045 05:01:39 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:16.045 05:01:39 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:16.045 [2024-11-18 05:01:39.558844] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.303 [2024-11-18 05:01:39.573283] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:21:16.303 [2024-11-18 05:01:39.573316] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.303 05:01:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.303 [2024-11-18 05:01:39.716932] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:16.562 05:01:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.563 "name": "raid_bdev1", 00:21:16.563 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:16.563 "strip_size_kb": 0, 00:21:16.563 "state": "online", 00:21:16.563 "raid_level": "raid1", 00:21:16.563 "superblock": false, 00:21:16.563 "num_base_bdevs": 4, 00:21:16.563 "num_base_bdevs_discovered": 3, 00:21:16.563 "num_base_bdevs_operational": 3, 00:21:16.563 "process": { 00:21:16.563 "type": "rebuild", 00:21:16.563 "target": "spare", 00:21:16.563 "progress": { 00:21:16.563 "blocks": 22528, 00:21:16.563 "percent": 34 00:21:16.563 } 00:21:16.563 }, 00:21:16.563 "base_bdevs_list": [ 00:21:16.563 { 00:21:16.563 "name": "spare", 00:21:16.563 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:16.563 "is_configured": true, 00:21:16.563 "data_offset": 0, 00:21:16.563 "data_size": 65536 00:21:16.563 }, 00:21:16.563 { 00:21:16.563 "name": null, 00:21:16.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.563 "is_configured": false, 00:21:16.563 "data_offset": 0, 00:21:16.563 "data_size": 65536 00:21:16.563 }, 00:21:16.563 { 00:21:16.563 "name": "BaseBdev3", 00:21:16.563 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:16.563 "is_configured": true, 00:21:16.563 "data_offset": 0, 00:21:16.563 "data_size": 65536 00:21:16.563 }, 00:21:16.563 { 00:21:16.563 "name": "BaseBdev4", 00:21:16.563 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:16.563 "is_configured": true, 00:21:16.563 "data_offset": 0, 00:21:16.563 "data_size": 65536 00:21:16.563 } 00:21:16.563 ] 00:21:16.563 }' 00:21:16.563 
05:01:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@657 -- # local timeout=473 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.563 05:01:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.563 [2024-11-18 05:01:40.053276] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:16.822 05:01:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.822 "name": "raid_bdev1", 00:21:16.822 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:16.822 "strip_size_kb": 0, 00:21:16.822 "state": "online", 00:21:16.822 "raid_level": "raid1", 00:21:16.822 "superblock": false, 00:21:16.822 "num_base_bdevs": 4, 00:21:16.822 "num_base_bdevs_discovered": 3, 00:21:16.822 "num_base_bdevs_operational": 3, 00:21:16.822 "process": { 00:21:16.822 "type": "rebuild", 00:21:16.822 "target": "spare", 00:21:16.822 "progress": { 00:21:16.822 "blocks": 26624, 00:21:16.822 "percent": 40 00:21:16.822 } 00:21:16.822 }, 00:21:16.822 "base_bdevs_list": [ 00:21:16.822 { 00:21:16.822 "name": "spare", 00:21:16.822 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:16.822 "is_configured": true, 00:21:16.822 "data_offset": 0, 00:21:16.822 "data_size": 65536 00:21:16.822 }, 00:21:16.822 { 00:21:16.822 "name": null, 00:21:16.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.822 "is_configured": false, 00:21:16.822 "data_offset": 0, 00:21:16.822 "data_size": 65536 00:21:16.822 }, 00:21:16.822 { 00:21:16.822 "name": "BaseBdev3", 00:21:16.822 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:16.822 "is_configured": true, 00:21:16.822 "data_offset": 0, 00:21:16.822 "data_size": 65536 00:21:16.822 }, 00:21:16.822 { 00:21:16.822 "name": "BaseBdev4", 00:21:16.822 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:16.822 "is_configured": true, 00:21:16.822 "data_offset": 0, 00:21:16.822 "data_size": 65536 00:21:16.822 } 00:21:16.822 ] 00:21:16.822 }' 00:21:16.822 05:01:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.822 05:01:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.822 05:01:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.822 05:01:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.822 05:01:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:16.822 [2024-11-18 05:01:40.269752] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:16.822 [2024-11-18 05:01:40.270297] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 
offset_begin: 24576 offset_end: 30720 00:21:17.390 [2024-11-18 05:01:40.623502] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:17.390 [2024-11-18 05:01:40.825285] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:17.649 [2024-11-18 05:01:41.056815] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.909 05:01:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.168 05:01:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.168 "name": "raid_bdev1", 00:21:18.168 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:18.168 "strip_size_kb": 0, 00:21:18.168 "state": "online", 00:21:18.168 "raid_level": "raid1", 00:21:18.168 "superblock": false, 00:21:18.168 "num_base_bdevs": 4, 00:21:18.168 "num_base_bdevs_discovered": 3, 00:21:18.168 "num_base_bdevs_operational": 3, 00:21:18.168 "process": { 00:21:18.168 "type": "rebuild", 00:21:18.168 "target": "spare", 00:21:18.168 "progress": { 00:21:18.168 "blocks": 43008, 00:21:18.168 "percent": 65 00:21:18.168 } 00:21:18.168 }, 00:21:18.168 "base_bdevs_list": [ 00:21:18.168 { 00:21:18.168 "name": "spare", 00:21:18.168 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:18.168 "is_configured": true, 00:21:18.168 "data_offset": 0, 00:21:18.168 "data_size": 65536 00:21:18.168 }, 00:21:18.168 { 00:21:18.168 "name": null, 00:21:18.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.168 "is_configured": false, 00:21:18.168 "data_offset": 0, 00:21:18.168 "data_size": 65536 00:21:18.168 }, 00:21:18.168 { 00:21:18.168 "name": "BaseBdev3", 00:21:18.168 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:18.168 "is_configured": true, 00:21:18.168 "data_offset": 0, 00:21:18.168 "data_size": 65536 00:21:18.168 }, 00:21:18.168 { 00:21:18.168 "name": "BaseBdev4", 00:21:18.168 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:18.168 "is_configured": true, 00:21:18.168 "data_offset": 0, 00:21:18.168 "data_size": 65536 00:21:18.168 } 00:21:18.168 ] 00:21:18.168 }' 00:21:18.168 05:01:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.168 05:01:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.168 05:01:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.168 05:01:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.168 05:01:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:18.428 [2024-11-18 05:01:41.814803] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:18.687 [2024-11-18 05:01:42.029546] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 
offset_end: 55296 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.255 "name": "raid_bdev1", 00:21:19.255 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:19.255 "strip_size_kb": 0, 00:21:19.255 "state": "online", 00:21:19.255 "raid_level": "raid1", 00:21:19.255 "superblock": false, 00:21:19.255 "num_base_bdevs": 4, 00:21:19.255 "num_base_bdevs_discovered": 3, 00:21:19.255 "num_base_bdevs_operational": 3, 00:21:19.255 "process": { 00:21:19.255 "type": "rebuild", 00:21:19.255 "target": "spare", 00:21:19.255 "progress": { 00:21:19.255 "blocks": 63488, 00:21:19.255 "percent": 96 00:21:19.255 } 00:21:19.255 }, 00:21:19.255 "base_bdevs_list": [ 00:21:19.255 { 00:21:19.255 "name": "spare", 00:21:19.255 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:19.255 "is_configured": true, 00:21:19.255 "data_offset": 0, 00:21:19.255 "data_size": 65536 00:21:19.255 }, 00:21:19.255 { 00:21:19.255 "name": null, 00:21:19.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.255 "is_configured": false, 00:21:19.255 "data_offset": 0, 00:21:19.255 "data_size": 65536 00:21:19.255 }, 00:21:19.255 { 00:21:19.255 "name": "BaseBdev3", 00:21:19.255 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:19.255 "is_configured": true, 00:21:19.255 "data_offset": 0, 00:21:19.255 "data_size": 65536 00:21:19.255 }, 00:21:19.255 { 00:21:19.255 "name": "BaseBdev4", 00:21:19.255 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:19.255 "is_configured": true, 00:21:19.255 "data_offset": 0, 00:21:19.255 "data_size": 65536 00:21:19.255 } 00:21:19.255 ] 00:21:19.255 }' 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.255 05:01:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:19.255 [2024-11-18 05:01:42.759097] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:19.521 [2024-11-18 05:01:42.864860] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:19.521 [2024-11-18 05:01:42.866828] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.533 05:01:43 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.533 05:01:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.533 "name": "raid_bdev1", 00:21:20.533 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:20.533 "strip_size_kb": 0, 00:21:20.534 "state": "online", 00:21:20.534 "raid_level": "raid1", 00:21:20.534 "superblock": false, 00:21:20.534 "num_base_bdevs": 4, 00:21:20.534 "num_base_bdevs_discovered": 3, 00:21:20.534 "num_base_bdevs_operational": 3, 00:21:20.534 "base_bdevs_list": [ 00:21:20.534 { 00:21:20.534 "name": "spare", 00:21:20.534 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:20.534 "is_configured": true, 00:21:20.534 "data_offset": 0, 00:21:20.534 "data_size": 65536 00:21:20.534 }, 00:21:20.534 { 00:21:20.534 "name": null, 00:21:20.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.534 "is_configured": false, 00:21:20.534 "data_offset": 0, 00:21:20.534 "data_size": 65536 00:21:20.534 }, 00:21:20.534 { 00:21:20.534 "name": "BaseBdev3", 00:21:20.534 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:20.534 "is_configured": true, 00:21:20.534 "data_offset": 0, 00:21:20.534 "data_size": 65536 00:21:20.534 }, 00:21:20.534 { 00:21:20.534 "name": "BaseBdev4", 00:21:20.534 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:20.534 "is_configured": true, 00:21:20.534 "data_offset": 0, 00:21:20.534 "data_size": 65536 00:21:20.534 } 00:21:20.534 ] 00:21:20.534 }' 00:21:20.534 05:01:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@660 -- # break 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.534 05:01:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.793 "name": "raid_bdev1", 00:21:20.793 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:20.793 "strip_size_kb": 0, 00:21:20.793 "state": "online", 00:21:20.793 "raid_level": "raid1", 00:21:20.793 "superblock": false, 00:21:20.793 "num_base_bdevs": 4, 00:21:20.793 "num_base_bdevs_discovered": 3, 00:21:20.793 "num_base_bdevs_operational": 3, 00:21:20.793 "base_bdevs_list": [ 00:21:20.793 { 00:21:20.793 "name": "spare", 00:21:20.793 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:20.793 "is_configured": true, 00:21:20.793 "data_offset": 0, 00:21:20.793 "data_size": 65536 00:21:20.793 }, 00:21:20.793 { 00:21:20.793 "name": null, 00:21:20.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.793 "is_configured": false, 00:21:20.793 "data_offset": 0, 
00:21:20.793 "data_size": 65536 00:21:20.793 }, 00:21:20.793 { 00:21:20.793 "name": "BaseBdev3", 00:21:20.793 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:20.793 "is_configured": true, 00:21:20.793 "data_offset": 0, 00:21:20.793 "data_size": 65536 00:21:20.793 }, 00:21:20.793 { 00:21:20.793 "name": "BaseBdev4", 00:21:20.793 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:20.793 "is_configured": true, 00:21:20.793 "data_offset": 0, 00:21:20.793 "data_size": 65536 00:21:20.793 } 00:21:20.793 ] 00:21:20.793 }' 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.793 05:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.053 05:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.053 "name": "raid_bdev1", 00:21:21.053 "uuid": "65e91401-6cd9-437b-8685-9b9e2fdc0dd2", 00:21:21.053 "strip_size_kb": 0, 00:21:21.053 "state": "online", 00:21:21.053 "raid_level": "raid1", 00:21:21.053 "superblock": false, 00:21:21.053 "num_base_bdevs": 4, 00:21:21.053 "num_base_bdevs_discovered": 3, 00:21:21.053 "num_base_bdevs_operational": 3, 00:21:21.053 "base_bdevs_list": [ 00:21:21.053 { 00:21:21.053 "name": "spare", 00:21:21.053 "uuid": "88988234-fe66-5f74-ab4b-10ecfedb0470", 00:21:21.053 "is_configured": true, 00:21:21.053 "data_offset": 0, 00:21:21.053 "data_size": 65536 00:21:21.053 }, 00:21:21.053 { 00:21:21.053 "name": null, 00:21:21.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.053 "is_configured": false, 00:21:21.053 "data_offset": 0, 00:21:21.053 "data_size": 65536 00:21:21.053 }, 00:21:21.053 { 00:21:21.053 "name": "BaseBdev3", 00:21:21.053 "uuid": "c9313c90-0749-43f6-9e7b-e6a9f2c6824e", 00:21:21.053 "is_configured": true, 00:21:21.053 "data_offset": 0, 00:21:21.053 "data_size": 65536 00:21:21.053 }, 00:21:21.053 { 00:21:21.053 "name": "BaseBdev4", 00:21:21.053 "uuid": "1c40fefa-00ff-4d1b-a8e4-d54188c16e77", 00:21:21.053 "is_configured": true, 00:21:21.053 "data_offset": 0, 00:21:21.053 "data_size": 65536 00:21:21.053 } 00:21:21.053 ] 00:21:21.053 }' 00:21:21.053 05:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.053 05:01:44 -- common/autotest_common.sh@10 -- # set +x 00:21:21.312 05:01:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:21:21.571 [2024-11-18 05:01:44.964844] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.571 [2024-11-18 05:01:44.964881] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.571 00:21:21.571 Latency(us) 00:21:21.571 [2024-11-18T05:01:45.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.571 [2024-11-18T05:01:45.095Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:21.571 raid_bdev1 : 10.60 102.41 307.23 0.00 0.00 13660.34 243.90 114866.73 00:21:21.571 [2024-11-18T05:01:45.095Z] =================================================================================================================== 00:21:21.571 [2024-11-18T05:01:45.095Z] Total : 102.41 307.23 0.00 0.00 13660.34 243.90 114866.73 00:21:21.571 [2024-11-18 05:01:45.052259] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.571 0 00:21:21.571 [2024-11-18 05:01:45.052466] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.571 [2024-11-18 05:01:45.052566] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.571 [2024-11-18 05:01:45.052589] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:21:21.571 05:01:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.571 05:01:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:21.830 05:01:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:21.830 05:01:45 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:21.830 05:01:45 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@12 -- # local i 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.830 05:01:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:22.089 /dev/nbd0 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:22.089 05:01:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:22.089 05:01:45 -- common/autotest_common.sh@867 -- # local i 00:21:22.089 05:01:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:22.089 05:01:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:22.089 05:01:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:22.089 05:01:45 -- common/autotest_common.sh@871 -- # break 00:21:22.089 05:01:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:22.089 05:01:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:22.089 05:01:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.089 1+0 records in 00:21:22.089 1+0 
records out 00:21:22.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185988 s, 22.0 MB/s 00:21:22.089 05:01:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.089 05:01:45 -- common/autotest_common.sh@884 -- # size=4096 00:21:22.089 05:01:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.089 05:01:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:22.089 05:01:45 -- common/autotest_common.sh@887 -- # return 0 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:22.089 05:01:45 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:22.089 05:01:45 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:22.089 05:01:45 -- bdev/bdev_raid.sh@678 -- # continue 00:21:22.089 05:01:45 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:22.089 05:01:45 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:22.089 05:01:45 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@12 -- # local i 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:22.089 05:01:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:22.349 /dev/nbd1 00:21:22.349 05:01:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:22.349 05:01:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:22.349 05:01:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:22.349 05:01:45 -- common/autotest_common.sh@867 -- # local i 00:21:22.349 05:01:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:22.349 05:01:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:22.349 05:01:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:22.349 05:01:45 -- common/autotest_common.sh@871 -- # break 00:21:22.349 05:01:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:22.349 05:01:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:22.349 05:01:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.349 1+0 records in 00:21:22.349 1+0 records out 00:21:22.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032249 s, 12.7 MB/s 00:21:22.349 05:01:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.609 05:01:45 -- common/autotest_common.sh@884 -- # size=4096 00:21:22.609 05:01:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.609 05:01:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:22.609 05:01:45 -- common/autotest_common.sh@887 -- # return 0 00:21:22.609 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.609 05:01:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:22.609 05:01:45 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:21:22.609 05:01:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:22.609 05:01:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.609 05:01:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:22.609 05:01:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.609 05:01:46 -- bdev/nbd_common.sh@51 -- # local i 00:21:22.609 05:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.609 05:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@41 -- # break 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.869 05:01:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:22.869 05:01:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:22.869 05:01:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@12 -- # local i 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:22.869 05:01:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:23.128 /dev/nbd1 00:21:23.128 05:01:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:23.128 05:01:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:23.128 05:01:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:23.128 05:01:46 -- common/autotest_common.sh@867 -- # local i 00:21:23.129 05:01:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:23.129 05:01:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:23.129 05:01:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:23.129 05:01:46 -- common/autotest_common.sh@871 -- # break 00:21:23.129 05:01:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:23.129 05:01:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:23.129 05:01:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.129 1+0 records in 00:21:23.129 1+0 records out 00:21:23.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294234 s, 13.9 MB/s 00:21:23.129 05:01:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.129 05:01:46 -- common/autotest_common.sh@884 -- # size=4096 00:21:23.129 05:01:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.129 
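[Annotation: the nbd_start_disk / dd / cmp sequence traced here is the data-integrity half of raid_rebuild_test_io. The spare and each surviving base bdev are exported through the kernel NBD driver and compared byte for byte from data_offset onward, since raid1 members must hold identical data past any superblock; the emptied slot left by the removed base bdev is skipped via the '[ -z ]' test and 'continue' at bdev_raid.sh@677-678. A minimal sketch of that pattern, using the RPC socket and tools shown in this log (the rpc shell wrapper and the polling loop standing in for waitfornbd are shorthand, not the exact nbd_common.sh helpers):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

rpc nbd_start_disk spare /dev/nbd0       # export SPDK bdevs as kernel block devices
rpc nbd_start_disk BaseBdev3 /dev/nbd1

# what waitfornbd amounts to: poll until the kernel registers the device
while ! grep -q -w nbd1 /proc/partitions; do sleep 0.1; done

cmp -i 0 /dev/nbd0 /dev/nbd1             # -i skips data_offset bytes (0 here, 2048 in the sb variant)

rpc nbd_stop_disk /dev/nbd1              # detach before comparing the next base bdev
]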
05:01:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:23.129 05:01:46 -- common/autotest_common.sh@887 -- # return 0 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.129 05:01:46 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:23.129 05:01:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@51 -- # local i 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:23.129 05:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@41 -- # break 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@45 -- # return 0 00:21:23.388 05:01:46 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@51 -- # local i 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:23.388 05:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@41 -- # break 00:21:23.648 05:01:47 -- bdev/nbd_common.sh@45 -- # return 0 00:21:23.648 05:01:47 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:23.648 05:01:47 -- bdev/bdev_raid.sh@709 -- # killprocess 81325 00:21:23.648 05:01:47 -- common/autotest_common.sh@936 -- # '[' -z 81325 ']' 00:21:23.648 05:01:47 -- common/autotest_common.sh@940 -- # kill -0 81325 00:21:23.648 05:01:47 -- common/autotest_common.sh@941 -- # uname 00:21:23.648 05:01:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.648 05:01:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81325 00:21:23.648 killing process with pid 81325 00:21:23.648 Received shutdown signal, test time was about 12.618423 seconds 00:21:23.648 00:21:23.648 Latency(us) 00:21:23.648 [2024-11-18T05:01:47.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.648 [2024-11-18T05:01:47.172Z] 
=================================================================================================================== 00:21:23.648 [2024-11-18T05:01:47.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.648 05:01:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.648 05:01:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.648 05:01:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81325' 00:21:23.648 05:01:47 -- common/autotest_common.sh@955 -- # kill 81325 00:21:23.648 [2024-11-18 05:01:47.050816] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:23.648 05:01:47 -- common/autotest_common.sh@960 -- # wait 81325 00:21:23.907 [2024-11-18 05:01:47.324478] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:24.846 00:21:24.846 real 0m17.569s 00:21:24.846 user 0m25.428s 00:21:24.846 sys 0m2.258s 00:21:24.846 05:01:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:24.846 ************************************ 00:21:24.846 END TEST raid_rebuild_test_io 00:21:24.846 ************************************ 00:21:24.846 05:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:21:24.846 05:01:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:24.846 05:01:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:24.846 05:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.846 ************************************ 00:21:24.846 START TEST raid_rebuild_test_sb_io 00:21:24.846 ************************************ 00:21:24.846 05:01:48 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:24.846 05:01:48 -- 
bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=81804 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81804 /var/tmp/spdk-raid.sock 00:21:24.846 05:01:48 -- common/autotest_common.sh@829 -- # '[' -z 81804 ']' 00:21:24.846 05:01:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:24.846 05:01:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:24.846 05:01:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.846 05:01:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:24.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:24.846 05:01:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.846 05:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:25.105 [2024-11-18 05:01:48.391472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:25.105 [2024-11-18 05:01:48.391865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81804 ] 00:21:25.105 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:25.105 Zero copy mechanism will not be used. 
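[Annotation: everything from here on runs against a fresh bdevperf instance rather than a bare SPDK app. The fixture starts bdevperf idle (-z), waits for its RPC socket, builds the RAID set over RPC, and only later kicks off the 60-second workload via perform_tests. A sketch of that startup handshake, with the flags copied from the command line above (waitforlisten is the autotest_common.sh helper seen in this trace):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
sock=/var/tmp/spdk-raid.sock

# -z: start idle and wait for RPC configuration; -T raid_bdev1: the job target;
# -t 60 -w randrw -M 50 -o 3M -q 2: 60 s of 50/50 random read/write with 3 MiB
# I/Os at queue depth 2. 3 MiB exceeds the 64 KiB zero-copy threshold, which is
# why the notice above reports that zero copy will not be used.
"$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" "$sock"   # block until the socket answers RPCs
]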
00:21:25.105 [2024-11-18 05:01:48.561764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.364 [2024-11-18 05:01:48.712217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.364 [2024-11-18 05:01:48.855005] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.931 05:01:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.931 05:01:49 -- common/autotest_common.sh@862 -- # return 0 00:21:25.931 05:01:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:25.931 05:01:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:25.931 05:01:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:26.189 BaseBdev1_malloc 00:21:26.189 05:01:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:26.189 [2024-11-18 05:01:49.662239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:26.189 [2024-11-18 05:01:49.662501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.189 [2024-11-18 05:01:49.662546] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:21:26.189 [2024-11-18 05:01:49.662586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.189 [2024-11-18 05:01:49.664741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.189 [2024-11-18 05:01:49.664783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:26.189 BaseBdev1 00:21:26.189 05:01:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:26.189 05:01:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:26.189 05:01:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:26.448 BaseBdev2_malloc 00:21:26.448 05:01:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:26.707 [2024-11-18 05:01:50.075609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:26.707 [2024-11-18 05:01:50.075685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.707 [2024-11-18 05:01:50.075720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:21:26.707 [2024-11-18 05:01:50.075737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.707 [2024-11-18 05:01:50.077927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.707 [2024-11-18 05:01:50.077970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:26.707 BaseBdev2 00:21:26.707 05:01:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:26.707 05:01:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:26.707 05:01:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:26.967 BaseBdev3_malloc 00:21:26.967 05:01:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:21:27.225 [2024-11-18 05:01:50.575822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:27.225 [2024-11-18 05:01:50.576031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.225 [2024-11-18 05:01:50.576067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:21:27.225 [2024-11-18 05:01:50.576083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.225 [2024-11-18 05:01:50.578371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.225 [2024-11-18 05:01:50.578428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:27.225 BaseBdev3 00:21:27.225 05:01:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:27.225 05:01:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:27.225 05:01:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:27.484 BaseBdev4_malloc 00:21:27.484 05:01:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:27.484 [2024-11-18 05:01:50.968081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:27.484 [2024-11-18 05:01:50.968163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.484 [2024-11-18 05:01:50.968194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:21:27.484 [2024-11-18 05:01:50.968398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.484 [2024-11-18 05:01:50.970829] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.484 [2024-11-18 05:01:50.970905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:27.484 BaseBdev4 00:21:27.484 05:01:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:27.742 spare_malloc 00:21:27.742 05:01:51 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:28.000 spare_delay 00:21:28.000 05:01:51 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:28.259 [2024-11-18 05:01:51.596073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.259 [2024-11-18 05:01:51.596326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.259 [2024-11-18 05:01:51.596397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:21:28.259 [2024-11-18 05:01:51.596639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.259 [2024-11-18 05:01:51.598849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.259 [2024-11-18 05:01:51.599044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.259 spare 00:21:28.259 05:01:51 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:28.259 [2024-11-18 05:01:51.776146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.259 [2024-11-18 05:01:51.778199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.259 [2024-11-18 05:01:51.778451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:28.259 [2024-11-18 05:01:51.778560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:28.259 [2024-11-18 05:01:51.778875] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:21:28.259 [2024-11-18 05:01:51.778930] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:28.260 [2024-11-18 05:01:51.779140] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:28.260 [2024-11-18 05:01:51.779696] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:21:28.260 [2024-11-18 05:01:51.779857] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:21:28.260 [2024-11-18 05:01:51.780192] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.519 "name": "raid_bdev1", 00:21:28.519 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:28.519 "strip_size_kb": 0, 00:21:28.519 "state": "online", 00:21:28.519 "raid_level": "raid1", 00:21:28.519 "superblock": true, 00:21:28.519 "num_base_bdevs": 4, 00:21:28.519 "num_base_bdevs_discovered": 4, 00:21:28.519 "num_base_bdevs_operational": 4, 00:21:28.519 "base_bdevs_list": [ 00:21:28.519 { 00:21:28.519 "name": "BaseBdev1", 00:21:28.519 "uuid": "88965772-c28d-5b78-8242-70bf818bae92", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "name": "BaseBdev2", 00:21:28.519 "uuid": "7fdee842-3394-5cd4-85ff-162475c8c482", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 }, 00:21:28.519 { 00:21:28.519 "name": "BaseBdev3", 00:21:28.519 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 }, 00:21:28.519 
{ 00:21:28.519 "name": "BaseBdev4", 00:21:28.519 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:28.519 "is_configured": true, 00:21:28.519 "data_offset": 2048, 00:21:28.519 "data_size": 63488 00:21:28.519 } 00:21:28.519 ] 00:21:28.519 }' 00:21:28.519 05:01:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.519 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:21:28.777 05:01:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:28.777 05:01:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:29.036 [2024-11-18 05:01:52.468486] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:29.036 05:01:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:29.036 05:01:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.036 05:01:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:29.295 05:01:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:29.295 05:01:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:29.295 05:01:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:29.295 05:01:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:29.295 [2024-11-18 05:01:52.762054] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:29.295 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:29.295 Zero copy mechanism will not be used. 00:21:29.295 Running I/O for 60 seconds... 
00:21:29.554 [2024-11-18 05:01:52.893667] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:29.554 [2024-11-18 05:01:52.905486] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.554 05:01:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.813 05:01:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.813 "name": "raid_bdev1", 00:21:29.813 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:29.813 "strip_size_kb": 0, 00:21:29.813 "state": "online", 00:21:29.813 "raid_level": "raid1", 00:21:29.813 "superblock": true, 00:21:29.813 "num_base_bdevs": 4, 00:21:29.813 "num_base_bdevs_discovered": 3, 00:21:29.813 "num_base_bdevs_operational": 3, 00:21:29.813 "base_bdevs_list": [ 00:21:29.813 { 00:21:29.813 "name": null, 00:21:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.813 "is_configured": false, 00:21:29.813 "data_offset": 2048, 00:21:29.813 "data_size": 63488 00:21:29.813 }, 00:21:29.813 { 00:21:29.813 "name": "BaseBdev2", 00:21:29.813 "uuid": "7fdee842-3394-5cd4-85ff-162475c8c482", 00:21:29.813 "is_configured": true, 00:21:29.813 "data_offset": 2048, 00:21:29.813 "data_size": 63488 00:21:29.813 }, 00:21:29.813 { 00:21:29.813 "name": "BaseBdev3", 00:21:29.813 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:29.813 "is_configured": true, 00:21:29.813 "data_offset": 2048, 00:21:29.813 "data_size": 63488 00:21:29.813 }, 00:21:29.813 { 00:21:29.813 "name": "BaseBdev4", 00:21:29.813 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:29.813 "is_configured": true, 00:21:29.813 "data_offset": 2048, 00:21:29.813 "data_size": 63488 00:21:29.813 } 00:21:29.813 ] 00:21:29.813 }' 00:21:29.813 05:01:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.813 05:01:53 -- common/autotest_common.sh@10 -- # set +x 00:21:30.072 05:01:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.331 [2024-11-18 05:01:53.649098] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:30.331 [2024-11-18 05:01:53.649155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.331 05:01:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:30.331 [2024-11-18 05:01:53.691257] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:30.331 [2024-11-18 05:01:53.693185] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.331 
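[Annotation: attaching the spare with bdev_raid_add_base_bdev is what starts the rebuild just logged. While it runs, bdev_raid_get_bdevs exposes a .process object with its type, target, and block/percent progress, and the object disappears once the rebuild completes; the test's jq probes therefore use a // "none" fallback so the same assertion shape works before, during, and after. Roughly:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]]   # reads "none" once done
[[ $(jq -r '.process.target // "none"' <<<"$info") == spare ]]
]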
[2024-11-18 05:01:53.809574] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:30.331 [2024-11-18 05:01:53.810927] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:30.590 [2024-11-18 05:01:54.050555] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:30.590 [2024-11-18 05:01:54.051258] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:31.159 [2024-11-18 05:01:54.538453] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.418 05:01:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.678 05:01:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.678 "name": "raid_bdev1", 00:21:31.678 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:31.678 "strip_size_kb": 0, 00:21:31.678 "state": "online", 00:21:31.678 "raid_level": "raid1", 00:21:31.678 "superblock": true, 00:21:31.678 "num_base_bdevs": 4, 00:21:31.678 "num_base_bdevs_discovered": 4, 00:21:31.678 "num_base_bdevs_operational": 4, 00:21:31.678 "process": { 00:21:31.678 "type": "rebuild", 00:21:31.678 "target": "spare", 00:21:31.678 "progress": { 00:21:31.678 "blocks": 14336, 00:21:31.678 "percent": 22 00:21:31.678 } 00:21:31.678 }, 00:21:31.678 "base_bdevs_list": [ 00:21:31.678 { 00:21:31.678 "name": "spare", 00:21:31.678 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:31.678 "is_configured": true, 00:21:31.678 "data_offset": 2048, 00:21:31.678 "data_size": 63488 00:21:31.678 }, 00:21:31.678 { 00:21:31.678 "name": "BaseBdev2", 00:21:31.678 "uuid": "7fdee842-3394-5cd4-85ff-162475c8c482", 00:21:31.678 "is_configured": true, 00:21:31.678 "data_offset": 2048, 00:21:31.678 "data_size": 63488 00:21:31.678 }, 00:21:31.678 { 00:21:31.678 "name": "BaseBdev3", 00:21:31.678 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:31.678 "is_configured": true, 00:21:31.678 "data_offset": 2048, 00:21:31.678 "data_size": 63488 00:21:31.678 }, 00:21:31.678 { 00:21:31.678 "name": "BaseBdev4", 00:21:31.678 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:31.678 "is_configured": true, 00:21:31.678 "data_offset": 2048, 00:21:31.678 "data_size": 63488 00:21:31.678 } 00:21:31.678 ] 00:21:31.678 }' 00:21:31.678 05:01:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.678 05:01:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.678 05:01:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.678 05:01:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.678 05:01:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:31.678 [2024-11-18 
05:01:55.193864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:31.938 [2024-11-18 05:01:55.205084] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.938 [2024-11-18 05:01:55.302205] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:31.938 [2024-11-18 05:01:55.409274] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:31.938 [2024-11-18 05:01:55.425068] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.938 [2024-11-18 05:01:55.448585] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.197 05:01:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.456 05:01:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:32.456 "name": "raid_bdev1", 00:21:32.456 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:32.456 "strip_size_kb": 0, 00:21:32.456 "state": "online", 00:21:32.456 "raid_level": "raid1", 00:21:32.456 "superblock": true, 00:21:32.456 "num_base_bdevs": 4, 00:21:32.456 "num_base_bdevs_discovered": 3, 00:21:32.456 "num_base_bdevs_operational": 3, 00:21:32.456 "base_bdevs_list": [ 00:21:32.456 { 00:21:32.456 "name": null, 00:21:32.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.456 "is_configured": false, 00:21:32.456 "data_offset": 2048, 00:21:32.456 "data_size": 63488 00:21:32.456 }, 00:21:32.456 { 00:21:32.456 "name": "BaseBdev2", 00:21:32.456 "uuid": "7fdee842-3394-5cd4-85ff-162475c8c482", 00:21:32.456 "is_configured": true, 00:21:32.456 "data_offset": 2048, 00:21:32.456 "data_size": 63488 00:21:32.456 }, 00:21:32.456 { 00:21:32.456 "name": "BaseBdev3", 00:21:32.456 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:32.456 "is_configured": true, 00:21:32.456 "data_offset": 2048, 00:21:32.456 "data_size": 63488 00:21:32.456 }, 00:21:32.456 { 00:21:32.456 "name": "BaseBdev4", 00:21:32.456 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:32.456 "is_configured": true, 00:21:32.456 "data_offset": 2048, 00:21:32.456 "data_size": 63488 00:21:32.456 } 00:21:32.456 ] 00:21:32.456 }' 00:21:32.456 05:01:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:32.456 05:01:55 -- common/autotest_common.sh@10 -- # set +x 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.715 05:01:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.975 05:01:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.975 "name": "raid_bdev1", 00:21:32.975 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:32.975 "strip_size_kb": 0, 00:21:32.975 "state": "online", 00:21:32.975 "raid_level": "raid1", 00:21:32.975 "superblock": true, 00:21:32.975 "num_base_bdevs": 4, 00:21:32.975 "num_base_bdevs_discovered": 3, 00:21:32.975 "num_base_bdevs_operational": 3, 00:21:32.975 "base_bdevs_list": [ 00:21:32.975 { 00:21:32.975 "name": null, 00:21:32.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.975 "is_configured": false, 00:21:32.975 "data_offset": 2048, 00:21:32.975 "data_size": 63488 00:21:32.975 }, 00:21:32.975 { 00:21:32.975 "name": "BaseBdev2", 00:21:32.975 "uuid": "7fdee842-3394-5cd4-85ff-162475c8c482", 00:21:32.975 "is_configured": true, 00:21:32.975 "data_offset": 2048, 00:21:32.975 "data_size": 63488 00:21:32.975 }, 00:21:32.975 { 00:21:32.975 "name": "BaseBdev3", 00:21:32.975 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:32.975 "is_configured": true, 00:21:32.975 "data_offset": 2048, 00:21:32.975 "data_size": 63488 00:21:32.975 }, 00:21:32.975 { 00:21:32.975 "name": "BaseBdev4", 00:21:32.975 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:32.975 "is_configured": true, 00:21:32.975 "data_offset": 2048, 00:21:32.975 "data_size": 63488 00:21:32.975 } 00:21:32.975 ] 00:21:32.975 }' 00:21:32.975 05:01:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.975 05:01:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:32.975 05:01:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.975 05:01:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:32.975 05:01:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:33.234 [2024-11-18 05:01:56.562350] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:33.234 [2024-11-18 05:01:56.562661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:33.234 05:01:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:33.234 [2024-11-18 05:01:56.617611] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:21:33.234 [2024-11-18 05:01:56.619645] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:33.234 [2024-11-18 05:01:56.734987] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:33.234 [2024-11-18 05:01:56.735454] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:33.493 [2024-11-18 05:01:56.857127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:33.493 [2024-11-18 05:01:56.857345] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 
offset_end: 6144 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.429 05:01:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.429 [2024-11-18 05:01:57.678998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:34.430 [2024-11-18 05:01:57.679538] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:34.430 "name": "raid_bdev1", 00:21:34.430 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:34.430 "strip_size_kb": 0, 00:21:34.430 "state": "online", 00:21:34.430 "raid_level": "raid1", 00:21:34.430 "superblock": true, 00:21:34.430 "num_base_bdevs": 4, 00:21:34.430 "num_base_bdevs_discovered": 4, 00:21:34.430 "num_base_bdevs_operational": 4, 00:21:34.430 "process": { 00:21:34.430 "type": "rebuild", 00:21:34.430 "target": "spare", 00:21:34.430 "progress": { 00:21:34.430 "blocks": 16384, 00:21:34.430 "percent": 25 00:21:34.430 } 00:21:34.430 }, 00:21:34.430 "base_bdevs_list": [ 00:21:34.430 { 00:21:34.430 "name": "spare", 00:21:34.430 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:34.430 "is_configured": true, 00:21:34.430 "data_offset": 2048, 00:21:34.430 "data_size": 63488 00:21:34.430 }, 00:21:34.430 { 00:21:34.430 "name": "BaseBdev2", 00:21:34.430 "uuid": "7fdee842-3394-5cd4-85ff-162475c8c482", 00:21:34.430 "is_configured": true, 00:21:34.430 "data_offset": 2048, 00:21:34.430 "data_size": 63488 00:21:34.430 }, 00:21:34.430 { 00:21:34.430 "name": "BaseBdev3", 00:21:34.430 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:34.430 "is_configured": true, 00:21:34.430 "data_offset": 2048, 00:21:34.430 "data_size": 63488 00:21:34.430 }, 00:21:34.430 { 00:21:34.430 "name": "BaseBdev4", 00:21:34.430 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:34.430 "is_configured": true, 00:21:34.430 "data_offset": 2048, 00:21:34.430 "data_size": 63488 00:21:34.430 } 00:21:34.430 ] 00:21:34.430 }' 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:34.430 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:34.430 05:01:57 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:21:34.689 [2024-11-18 05:01:57.986545] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:34.689 [2024-11-18 05:01:58.007630] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:34.689 [2024-11-18 05:01:58.124796] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:21:34.689 [2024-11-18 05:01:58.124993] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:21:34.948 [2024-11-18 05:01:58.244155] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.948 05:01:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.206 05:01:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.206 "name": "raid_bdev1", 00:21:35.206 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:35.206 "strip_size_kb": 0, 00:21:35.206 "state": "online", 00:21:35.206 "raid_level": "raid1", 00:21:35.206 "superblock": true, 00:21:35.206 "num_base_bdevs": 4, 00:21:35.206 "num_base_bdevs_discovered": 3, 00:21:35.206 "num_base_bdevs_operational": 3, 00:21:35.206 "process": { 00:21:35.207 "type": "rebuild", 00:21:35.207 "target": "spare", 00:21:35.207 "progress": { 00:21:35.207 "blocks": 24576, 00:21:35.207 "percent": 38 00:21:35.207 } 00:21:35.207 }, 00:21:35.207 "base_bdevs_list": [ 00:21:35.207 { 00:21:35.207 "name": "spare", 00:21:35.207 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:35.207 "is_configured": true, 00:21:35.207 "data_offset": 2048, 00:21:35.207 "data_size": 63488 00:21:35.207 }, 00:21:35.207 { 00:21:35.207 "name": null, 00:21:35.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.207 "is_configured": false, 00:21:35.207 "data_offset": 2048, 00:21:35.207 "data_size": 63488 00:21:35.207 }, 00:21:35.207 { 00:21:35.207 "name": "BaseBdev3", 00:21:35.207 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:35.207 "is_configured": true, 00:21:35.207 "data_offset": 2048, 00:21:35.207 "data_size": 63488 00:21:35.207 }, 00:21:35.207 { 00:21:35.207 "name": "BaseBdev4", 00:21:35.207 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:35.207 "is_configured": true, 00:21:35.207 "data_offset": 2048, 00:21:35.207 "data_size": 63488 00:21:35.207 } 00:21:35.207 ] 00:21:35.207 }' 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@657 -- # local timeout=492 
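[Annotation: two details in this stretch are worth unpacking. The "line 617: [: =: unary operator expected" message is a real shell bug captured by the run: a variable expanded empty inside '[' ... = false ']', leaving the test operator with no left operand; quoting the expansion is the standard fix, though this note makes no claim about how bdev_raid.sh was later patched. The 'local timeout=492' that follows sets up a bounded wait on bash's built-in SECONDS counter: the loop re-checks the rebuild process and sleeps until it either finishes or the budget runs out. A sketch of both, under those assumptions:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

v=""                         # an empty expansion reproduces the error:
# [ $v = false ]             # -> [: =: unary operator expected
[ "$v" = false ] || true     # quoted, the test stays well-formed

timeout=492                  # value taken from the trace above
while (( SECONDS < timeout )); do
    type=$(rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ $type == rebuild ]] || break   # the @660 break: rebuild no longer in flight
    sleep 1
done
]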
00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.207 05:01:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.207 [2024-11-18 05:01:58.697238] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:35.465 05:01:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.465 "name": "raid_bdev1", 00:21:35.465 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:35.465 "strip_size_kb": 0, 00:21:35.465 "state": "online", 00:21:35.465 "raid_level": "raid1", 00:21:35.465 "superblock": true, 00:21:35.465 "num_base_bdevs": 4, 00:21:35.465 "num_base_bdevs_discovered": 3, 00:21:35.465 "num_base_bdevs_operational": 3, 00:21:35.465 "process": { 00:21:35.465 "type": "rebuild", 00:21:35.465 "target": "spare", 00:21:35.465 "progress": { 00:21:35.465 "blocks": 28672, 00:21:35.465 "percent": 45 00:21:35.465 } 00:21:35.465 }, 00:21:35.465 "base_bdevs_list": [ 00:21:35.465 { 00:21:35.465 "name": "spare", 00:21:35.465 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:35.465 "is_configured": true, 00:21:35.465 "data_offset": 2048, 00:21:35.465 "data_size": 63488 00:21:35.465 }, 00:21:35.465 { 00:21:35.465 "name": null, 00:21:35.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.465 "is_configured": false, 00:21:35.465 "data_offset": 2048, 00:21:35.465 "data_size": 63488 00:21:35.465 }, 00:21:35.465 { 00:21:35.465 "name": "BaseBdev3", 00:21:35.465 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:35.465 "is_configured": true, 00:21:35.465 "data_offset": 2048, 00:21:35.465 "data_size": 63488 00:21:35.465 }, 00:21:35.465 { 00:21:35.465 "name": "BaseBdev4", 00:21:35.465 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:35.465 "is_configured": true, 00:21:35.465 "data_offset": 2048, 00:21:35.465 "data_size": 63488 00:21:35.465 } 00:21:35.465 ] 00:21:35.465 }' 00:21:35.465 05:01:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.465 05:01:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.465 05:01:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.465 05:01:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.465 05:01:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:35.465 [2024-11-18 05:01:58.933927] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:36.033 [2024-11-18 05:01:59.506894] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.291 05:01:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.549 [2024-11-18 05:01:59.951960] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:36.549 05:01:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:36.549 "name": "raid_bdev1", 00:21:36.549 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:36.549 "strip_size_kb": 0, 00:21:36.549 "state": "online", 00:21:36.549 "raid_level": "raid1", 00:21:36.549 "superblock": true, 00:21:36.549 "num_base_bdevs": 4, 00:21:36.549 "num_base_bdevs_discovered": 3, 00:21:36.549 "num_base_bdevs_operational": 3, 00:21:36.549 "process": { 00:21:36.549 "type": "rebuild", 00:21:36.549 "target": "spare", 00:21:36.549 "progress": { 00:21:36.549 "blocks": 45056, 00:21:36.549 "percent": 70 00:21:36.549 } 00:21:36.549 }, 00:21:36.549 "base_bdevs_list": [ 00:21:36.549 { 00:21:36.549 "name": "spare", 00:21:36.549 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:36.549 "is_configured": true, 00:21:36.549 "data_offset": 2048, 00:21:36.549 "data_size": 63488 00:21:36.549 }, 00:21:36.549 { 00:21:36.549 "name": null, 00:21:36.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.549 "is_configured": false, 00:21:36.549 "data_offset": 2048, 00:21:36.549 "data_size": 63488 00:21:36.549 }, 00:21:36.549 { 00:21:36.549 "name": "BaseBdev3", 00:21:36.549 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:36.550 "is_configured": true, 00:21:36.550 "data_offset": 2048, 00:21:36.550 "data_size": 63488 00:21:36.550 }, 00:21:36.550 { 00:21:36.550 "name": "BaseBdev4", 00:21:36.550 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:36.550 "is_configured": true, 00:21:36.550 "data_offset": 2048, 00:21:36.550 "data_size": 63488 00:21:36.550 } 00:21:36.550 ] 00:21:36.550 }' 00:21:36.550 05:01:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:36.550 05:01:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.550 05:01:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:36.550 05:01:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.550 05:01:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:36.808 [2024-11-18 05:02:00.183155] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:36.808 [2024-11-18 05:02:00.291751] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:37.068 [2024-11-18 05:02:00.514845] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:37.326 [2024-11-18 05:02:00.730297] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.585 
05:02:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.585 05:02:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.585 [2024-11-18 05:02:01.059387] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:37.845 [2024-11-18 05:02:01.159373] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:37.845 [2024-11-18 05:02:01.161150] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:37.845 "name": "raid_bdev1", 00:21:37.845 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:37.845 "strip_size_kb": 0, 00:21:37.845 "state": "online", 00:21:37.845 "raid_level": "raid1", 00:21:37.845 "superblock": true, 00:21:37.845 "num_base_bdevs": 4, 00:21:37.845 "num_base_bdevs_discovered": 3, 00:21:37.845 "num_base_bdevs_operational": 3, 00:21:37.845 "base_bdevs_list": [ 00:21:37.845 { 00:21:37.845 "name": "spare", 00:21:37.845 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:37.845 "is_configured": true, 00:21:37.845 "data_offset": 2048, 00:21:37.845 "data_size": 63488 00:21:37.845 }, 00:21:37.845 { 00:21:37.845 "name": null, 00:21:37.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.845 "is_configured": false, 00:21:37.845 "data_offset": 2048, 00:21:37.845 "data_size": 63488 00:21:37.845 }, 00:21:37.845 { 00:21:37.845 "name": "BaseBdev3", 00:21:37.845 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:37.845 "is_configured": true, 00:21:37.845 "data_offset": 2048, 00:21:37.845 "data_size": 63488 00:21:37.845 }, 00:21:37.845 { 00:21:37.845 "name": "BaseBdev4", 00:21:37.845 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:37.845 "is_configured": true, 00:21:37.845 "data_offset": 2048, 00:21:37.845 "data_size": 63488 00:21:37.845 } 00:21:37.845 ] 00:21:37.845 }' 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@660 -- # break 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.845 05:02:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:38.105 "name": "raid_bdev1", 00:21:38.105 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:38.105 "strip_size_kb": 0, 00:21:38.105 "state": "online", 00:21:38.105 "raid_level": "raid1", 00:21:38.105 "superblock": true, 00:21:38.105 "num_base_bdevs": 4, 00:21:38.105 "num_base_bdevs_discovered": 
3, 00:21:38.105 "num_base_bdevs_operational": 3, 00:21:38.105 "base_bdevs_list": [ 00:21:38.105 { 00:21:38.105 "name": "spare", 00:21:38.105 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:38.105 "is_configured": true, 00:21:38.105 "data_offset": 2048, 00:21:38.105 "data_size": 63488 00:21:38.105 }, 00:21:38.105 { 00:21:38.105 "name": null, 00:21:38.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.105 "is_configured": false, 00:21:38.105 "data_offset": 2048, 00:21:38.105 "data_size": 63488 00:21:38.105 }, 00:21:38.105 { 00:21:38.105 "name": "BaseBdev3", 00:21:38.105 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:38.105 "is_configured": true, 00:21:38.105 "data_offset": 2048, 00:21:38.105 "data_size": 63488 00:21:38.105 }, 00:21:38.105 { 00:21:38.105 "name": "BaseBdev4", 00:21:38.105 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:38.105 "is_configured": true, 00:21:38.105 "data_offset": 2048, 00:21:38.105 "data_size": 63488 00:21:38.105 } 00:21:38.105 ] 00:21:38.105 }' 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.105 05:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.364 05:02:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.364 "name": "raid_bdev1", 00:21:38.364 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:38.364 "strip_size_kb": 0, 00:21:38.364 "state": "online", 00:21:38.364 "raid_level": "raid1", 00:21:38.364 "superblock": true, 00:21:38.364 "num_base_bdevs": 4, 00:21:38.364 "num_base_bdevs_discovered": 3, 00:21:38.364 "num_base_bdevs_operational": 3, 00:21:38.364 "base_bdevs_list": [ 00:21:38.364 { 00:21:38.364 "name": "spare", 00:21:38.364 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:38.364 "is_configured": true, 00:21:38.364 "data_offset": 2048, 00:21:38.364 "data_size": 63488 00:21:38.364 }, 00:21:38.364 { 00:21:38.364 "name": null, 00:21:38.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.364 "is_configured": false, 00:21:38.364 "data_offset": 2048, 00:21:38.364 "data_size": 63488 00:21:38.364 }, 00:21:38.364 { 00:21:38.364 "name": "BaseBdev3", 00:21:38.364 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:38.364 "is_configured": true, 00:21:38.364 "data_offset": 2048, 00:21:38.364 "data_size": 63488 00:21:38.364 }, 00:21:38.364 { 00:21:38.364 "name": "BaseBdev4", 
00:21:38.364 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:38.364 "is_configured": true, 00:21:38.364 "data_offset": 2048, 00:21:38.364 "data_size": 63488 00:21:38.364 } 00:21:38.364 ] 00:21:38.364 }' 00:21:38.364 05:02:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.364 05:02:01 -- common/autotest_common.sh@10 -- # set +x 00:21:38.623 05:02:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:38.882 [2024-11-18 05:02:02.367823] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:38.882 [2024-11-18 05:02:02.368087] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.141 00:21:39.141 Latency(us) 00:21:39.141 [2024-11-18T05:02:02.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.141 [2024-11-18T05:02:02.665Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:39.141 raid_bdev1 : 9.70 98.70 296.11 0.00 0.00 14787.32 249.48 115819.99 00:21:39.141 [2024-11-18T05:02:02.665Z] =================================================================================================================== 00:21:39.141 [2024-11-18T05:02:02.665Z] Total : 98.70 296.11 0.00 0.00 14787.32 249.48 115819.99 00:21:39.141 [2024-11-18 05:02:02.475433] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.141 [2024-11-18 05:02:02.475635] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.141 0 00:21:39.141 [2024-11-18 05:02:02.475781] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.141 [2024-11-18 05:02:02.475804] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:21:39.141 05:02:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.141 05:02:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:39.400 05:02:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:39.400 05:02:02 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:39.400 05:02:02 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@12 -- # local i 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.400 05:02:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:39.400 /dev/nbd0 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:39.658 05:02:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:39.658 05:02:02 -- common/autotest_common.sh@867 -- # local i 00:21:39.658 05:02:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:39.658 05:02:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:39.658 
05:02:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:39.658 05:02:02 -- common/autotest_common.sh@871 -- # break 00:21:39.658 05:02:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:39.658 05:02:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:39.658 05:02:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.658 1+0 records in 00:21:39.658 1+0 records out 00:21:39.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487711 s, 8.4 MB/s 00:21:39.658 05:02:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.658 05:02:02 -- common/autotest_common.sh@884 -- # size=4096 00:21:39.658 05:02:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.658 05:02:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:39.658 05:02:02 -- common/autotest_common.sh@887 -- # return 0 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.658 05:02:02 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:39.658 05:02:02 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:39.658 05:02:02 -- bdev/bdev_raid.sh@678 -- # continue 00:21:39.658 05:02:02 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:39.658 05:02:02 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:39.658 05:02:02 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@12 -- # local i 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.658 05:02:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:39.917 /dev/nbd1 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:39.917 05:02:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:39.917 05:02:03 -- common/autotest_common.sh@867 -- # local i 00:21:39.917 05:02:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:39.917 05:02:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:39.917 05:02:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:39.917 05:02:03 -- common/autotest_common.sh@871 -- # break 00:21:39.917 05:02:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:39.917 05:02:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:39.917 05:02:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.917 1+0 records in 00:21:39.917 1+0 records out 00:21:39.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333014 s, 12.3 MB/s 00:21:39.917 05:02:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.917 05:02:03 -- 
common/autotest_common.sh@884 -- # size=4096 00:21:39.917 05:02:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.917 05:02:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:39.917 05:02:03 -- common/autotest_common.sh@887 -- # return 0 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.917 05:02:03 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:39.917 05:02:03 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@51 -- # local i 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:39.917 05:02:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@41 -- # break 00:21:40.176 05:02:03 -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.177 05:02:03 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:40.177 05:02:03 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:40.177 05:02:03 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@12 -- # local i 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:40.177 05:02:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:40.436 /dev/nbd1 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:40.436 05:02:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:40.436 05:02:03 -- common/autotest_common.sh@867 -- # local i 00:21:40.436 05:02:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:40.436 05:02:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:40.436 05:02:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:40.436 05:02:03 -- common/autotest_common.sh@871 -- # break 00:21:40.436 05:02:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:40.436 05:02:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:40.436 05:02:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:40.436 1+0 records in 00:21:40.436 1+0 records out 00:21:40.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252955 s, 16.2 MB/s 00:21:40.436 05:02:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.436 05:02:03 -- common/autotest_common.sh@884 -- # size=4096 00:21:40.436 05:02:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.436 05:02:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:40.436 05:02:03 -- common/autotest_common.sh@887 -- # return 0 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:40.436 05:02:03 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:40.436 05:02:03 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@51 -- # local i 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.436 05:02:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@41 -- # break 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.695 05:02:04 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@51 -- # local i 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.695 05:02:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@41 -- # break 00:21:40.954 05:02:04 -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.955 05:02:04 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:40.955 05:02:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:40.955 05:02:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:40.955 05:02:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:41.213 05:02:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:41.473 [2024-11-18 05:02:04.756942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:41.473 [2024-11-18 05:02:04.757007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.473 [2024-11-18 05:02:04.757035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:21:41.473 [2024-11-18 05:02:04.757049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.473 [2024-11-18 05:02:04.759304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.473 [2024-11-18 05:02:04.759346] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:41.473 [2024-11-18 05:02:04.759435] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:41.473 [2024-11-18 05:02:04.759486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.473 BaseBdev1 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@696 -- # continue 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:41.473 05:02:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:41.732 [2024-11-18 05:02:05.117044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:41.732 [2024-11-18 05:02:05.117104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.733 [2024-11-18 05:02:05.117130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:21:41.733 [2024-11-18 05:02:05.117143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.733 [2024-11-18 05:02:05.117672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.733 [2024-11-18 05:02:05.117701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:41.733 [2024-11-18 05:02:05.117796] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:41.733 [2024-11-18 05:02:05.117815] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:41.733 [2024-11-18 05:02:05.117825] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.733 [2024-11-18 05:02:05.117851] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state configuring 00:21:41.733 [2024-11-18 05:02:05.117911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.733 BaseBdev3 00:21:41.733 05:02:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:41.733 05:02:05 -- 
bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:41.733 05:02:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:41.992 05:02:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:42.251 [2024-11-18 05:02:05.541227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:42.251 [2024-11-18 05:02:05.541499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.251 [2024-11-18 05:02:05.541673] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:21:42.251 [2024-11-18 05:02:05.541783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.251 [2024-11-18 05:02:05.542407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.252 [2024-11-18 05:02:05.542575] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:42.252 [2024-11-18 05:02:05.542820] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:42.252 [2024-11-18 05:02:05.542967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:42.252 BaseBdev4 00:21:42.252 05:02:05 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:42.252 05:02:05 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:42.511 [2024-11-18 05:02:05.905320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.511 [2024-11-18 05:02:05.905527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.511 [2024-11-18 05:02:05.905601] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:21:42.511 [2024-11-18 05:02:05.905731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.511 [2024-11-18 05:02:05.906306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.511 [2024-11-18 05:02:05.906345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.511 [2024-11-18 05:02:05.906452] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:42.511 [2024-11-18 05:02:05.906497] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.511 spare 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.511 05:02:05 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.511 05:02:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.511 [2024-11-18 05:02:06.006627] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c380 00:21:42.511 [2024-11-18 05:02:06.006657] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:42.511 [2024-11-18 05:02:06.006794] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036870 00:21:42.511 [2024-11-18 05:02:06.007175] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c380 00:21:42.511 [2024-11-18 05:02:06.007194] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c380 00:21:42.511 [2024-11-18 05:02:06.007404] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.788 05:02:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.788 "name": "raid_bdev1", 00:21:42.788 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:42.788 "strip_size_kb": 0, 00:21:42.788 "state": "online", 00:21:42.788 "raid_level": "raid1", 00:21:42.788 "superblock": true, 00:21:42.788 "num_base_bdevs": 4, 00:21:42.788 "num_base_bdevs_discovered": 3, 00:21:42.788 "num_base_bdevs_operational": 3, 00:21:42.788 "base_bdevs_list": [ 00:21:42.788 { 00:21:42.788 "name": "spare", 00:21:42.788 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:42.788 "is_configured": true, 00:21:42.788 "data_offset": 2048, 00:21:42.788 "data_size": 63488 00:21:42.788 }, 00:21:42.788 { 00:21:42.788 "name": null, 00:21:42.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.788 "is_configured": false, 00:21:42.788 "data_offset": 2048, 00:21:42.788 "data_size": 63488 00:21:42.788 }, 00:21:42.788 { 00:21:42.788 "name": "BaseBdev3", 00:21:42.788 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:42.788 "is_configured": true, 00:21:42.788 "data_offset": 2048, 00:21:42.788 "data_size": 63488 00:21:42.788 }, 00:21:42.788 { 00:21:42.788 "name": "BaseBdev4", 00:21:42.788 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:42.788 "is_configured": true, 00:21:42.788 "data_offset": 2048, 00:21:42.788 "data_size": 63488 00:21:42.788 } 00:21:42.788 ] 00:21:42.788 }' 00:21:42.788 05:02:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.788 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.082 05:02:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:43.357 "name": "raid_bdev1", 00:21:43.357 "uuid": "bde438fe-ed3f-426d-b87f-108ee2e1b4ea", 00:21:43.357 "strip_size_kb": 0, 00:21:43.357 "state": "online", 00:21:43.357 "raid_level": "raid1", 00:21:43.357 "superblock": true, 00:21:43.357 "num_base_bdevs": 4, 00:21:43.357 "num_base_bdevs_discovered": 3, 00:21:43.357 
"num_base_bdevs_operational": 3, 00:21:43.357 "base_bdevs_list": [ 00:21:43.357 { 00:21:43.357 "name": "spare", 00:21:43.357 "uuid": "fd2e2403-57d2-5394-81f0-bab0348b814e", 00:21:43.357 "is_configured": true, 00:21:43.357 "data_offset": 2048, 00:21:43.357 "data_size": 63488 00:21:43.357 }, 00:21:43.357 { 00:21:43.357 "name": null, 00:21:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.357 "is_configured": false, 00:21:43.357 "data_offset": 2048, 00:21:43.357 "data_size": 63488 00:21:43.357 }, 00:21:43.357 { 00:21:43.357 "name": "BaseBdev3", 00:21:43.357 "uuid": "7d65b2ec-3b6e-5cc3-a966-941b21ff63b8", 00:21:43.357 "is_configured": true, 00:21:43.357 "data_offset": 2048, 00:21:43.357 "data_size": 63488 00:21:43.357 }, 00:21:43.357 { 00:21:43.357 "name": "BaseBdev4", 00:21:43.357 "uuid": "977d4b54-e36e-56f7-8d14-23a6e2e9a281", 00:21:43.357 "is_configured": true, 00:21:43.357 "data_offset": 2048, 00:21:43.357 "data_size": 63488 00:21:43.357 } 00:21:43.357 ] 00:21:43.357 }' 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.357 05:02:06 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:43.617 05:02:06 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.617 05:02:06 -- bdev/bdev_raid.sh@709 -- # killprocess 81804 00:21:43.617 05:02:06 -- common/autotest_common.sh@936 -- # '[' -z 81804 ']' 00:21:43.617 05:02:06 -- common/autotest_common.sh@940 -- # kill -0 81804 00:21:43.617 05:02:06 -- common/autotest_common.sh@941 -- # uname 00:21:43.617 05:02:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.617 05:02:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81804 00:21:43.617 killing process with pid 81804 00:21:43.617 Received shutdown signal, test time was about 14.176778 seconds 00:21:43.617 00:21:43.617 Latency(us) 00:21:43.617 [2024-11-18T05:02:07.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.617 [2024-11-18T05:02:07.141Z] =================================================================================================================== 00:21:43.617 [2024-11-18T05:02:07.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.617 05:02:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:43.617 05:02:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:43.617 05:02:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81804' 00:21:43.617 05:02:06 -- common/autotest_common.sh@955 -- # kill 81804 00:21:43.617 [2024-11-18 05:02:06.940806] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:43.617 05:02:06 -- common/autotest_common.sh@960 -- # wait 81804 00:21:43.617 [2024-11-18 05:02:06.940888] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.617 [2024-11-18 05:02:06.941029] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.617 [2024-11-18 05:02:06.941050] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c380 name raid_bdev1, state offline 00:21:43.876 
[2024-11-18 05:02:07.216553] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:44.813 00:21:44.813 real 0m19.853s 00:21:44.813 user 0m30.140s 00:21:44.813 sys 0m2.605s 00:21:44.813 ************************************ 00:21:44.813 END TEST raid_rebuild_test_sb_io 00:21:44.813 ************************************ 00:21:44.813 05:02:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:44.813 05:02:08 -- common/autotest_common.sh@10 -- # set +x 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:44.813 05:02:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:44.813 05:02:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.813 05:02:08 -- common/autotest_common.sh@10 -- # set +x 00:21:44.813 ************************************ 00:21:44.813 START TEST raid5f_state_function_test 00:21:44.813 ************************************ 00:21:44.813 05:02:08 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.813 Process raid pid: 82356 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=82356 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82356' 00:21:44.813 05:02:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:44.813 05:02:08 -- 
bdev/bdev_raid.sh@228 -- # waitforlisten 82356 /var/tmp/spdk-raid.sock 00:21:44.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:44.813 05:02:08 -- common/autotest_common.sh@829 -- # '[' -z 82356 ']' 00:21:44.813 05:02:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:44.813 05:02:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.813 05:02:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:44.813 05:02:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.813 05:02:08 -- common/autotest_common.sh@10 -- # set +x 00:21:44.813 [2024-11-18 05:02:08.289584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:44.813 [2024-11-18 05:02:08.289709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.071 [2024-11-18 05:02:08.441611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.331 [2024-11-18 05:02:08.593346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.331 [2024-11-18 05:02:08.738035] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.898 05:02:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.898 05:02:09 -- common/autotest_common.sh@862 -- # return 0 00:21:45.898 05:02:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:46.157 [2024-11-18 05:02:09.424622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:46.157 [2024-11-18 05:02:09.424674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:46.157 [2024-11-18 05:02:09.424688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:46.157 [2024-11-18 05:02:09.424701] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.157 [2024-11-18 05:02:09.424711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:46.157 [2024-11-18 05:02:09.424722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.157 "name": "Existed_Raid", 00:21:46.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.157 "strip_size_kb": 64, 00:21:46.157 "state": "configuring", 00:21:46.157 "raid_level": "raid5f", 00:21:46.157 "superblock": false, 00:21:46.157 "num_base_bdevs": 3, 00:21:46.157 "num_base_bdevs_discovered": 0, 00:21:46.157 "num_base_bdevs_operational": 3, 00:21:46.157 "base_bdevs_list": [ 00:21:46.157 { 00:21:46.157 "name": "BaseBdev1", 00:21:46.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.157 "is_configured": false, 00:21:46.157 "data_offset": 0, 00:21:46.157 "data_size": 0 00:21:46.157 }, 00:21:46.157 { 00:21:46.157 "name": "BaseBdev2", 00:21:46.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.157 "is_configured": false, 00:21:46.157 "data_offset": 0, 00:21:46.157 "data_size": 0 00:21:46.157 }, 00:21:46.157 { 00:21:46.157 "name": "BaseBdev3", 00:21:46.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.157 "is_configured": false, 00:21:46.157 "data_offset": 0, 00:21:46.157 "data_size": 0 00:21:46.157 } 00:21:46.157 ] 00:21:46.157 }' 00:21:46.157 05:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.157 05:02:09 -- common/autotest_common.sh@10 -- # set +x 00:21:46.726 05:02:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:46.726 [2024-11-18 05:02:10.136764] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.726 [2024-11-18 05:02:10.136807] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:21:46.726 05:02:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:46.985 [2024-11-18 05:02:10.368851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:46.985 [2024-11-18 05:02:10.368916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:46.985 [2024-11-18 05:02:10.368928] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:46.985 [2024-11-18 05:02:10.368944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.985 [2024-11-18 05:02:10.368952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:46.985 [2024-11-18 05:02:10.368963] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:46.985 05:02:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:47.245 [2024-11-18 05:02:10.645746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.245 BaseBdev1 00:21:47.245 05:02:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:47.245 05:02:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:47.245 05:02:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:47.245 05:02:10 -- common/autotest_common.sh@899 -- # local i 00:21:47.245 05:02:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:47.245 05:02:10 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:47.245 05:02:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.503 05:02:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:47.503 [ 00:21:47.504 { 00:21:47.504 "name": "BaseBdev1", 00:21:47.504 "aliases": [ 00:21:47.504 "4b5a753f-9276-4ff7-b9b6-b29d5a3bb639" 00:21:47.504 ], 00:21:47.504 "product_name": "Malloc disk", 00:21:47.504 "block_size": 512, 00:21:47.504 "num_blocks": 65536, 00:21:47.504 "uuid": "4b5a753f-9276-4ff7-b9b6-b29d5a3bb639", 00:21:47.504 "assigned_rate_limits": { 00:21:47.504 "rw_ios_per_sec": 0, 00:21:47.504 "rw_mbytes_per_sec": 0, 00:21:47.504 "r_mbytes_per_sec": 0, 00:21:47.504 "w_mbytes_per_sec": 0 00:21:47.504 }, 00:21:47.504 "claimed": true, 00:21:47.504 "claim_type": "exclusive_write", 00:21:47.504 "zoned": false, 00:21:47.504 "supported_io_types": { 00:21:47.504 "read": true, 00:21:47.504 "write": true, 00:21:47.504 "unmap": true, 00:21:47.504 "write_zeroes": true, 00:21:47.504 "flush": true, 00:21:47.504 "reset": true, 00:21:47.504 "compare": false, 00:21:47.504 "compare_and_write": false, 00:21:47.504 "abort": true, 00:21:47.504 "nvme_admin": false, 00:21:47.504 "nvme_io": false 00:21:47.504 }, 00:21:47.504 "memory_domains": [ 00:21:47.504 { 00:21:47.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.504 "dma_device_type": 2 00:21:47.504 } 00:21:47.504 ], 00:21:47.504 "driver_specific": {} 00:21:47.504 } 00:21:47.504 ] 00:21:47.504 05:02:11 -- common/autotest_common.sh@905 -- # return 0 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.504 05:02:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.764 05:02:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.764 "name": "Existed_Raid", 00:21:47.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.764 "strip_size_kb": 64, 00:21:47.764 "state": "configuring", 00:21:47.764 "raid_level": "raid5f", 00:21:47.764 "superblock": false, 00:21:47.764 "num_base_bdevs": 3, 00:21:47.764 "num_base_bdevs_discovered": 1, 00:21:47.764 "num_base_bdevs_operational": 3, 00:21:47.764 "base_bdevs_list": [ 00:21:47.764 { 00:21:47.764 "name": "BaseBdev1", 00:21:47.764 "uuid": "4b5a753f-9276-4ff7-b9b6-b29d5a3bb639", 00:21:47.764 "is_configured": true, 00:21:47.764 "data_offset": 0, 00:21:47.764 "data_size": 65536 00:21:47.764 }, 00:21:47.764 { 00:21:47.764 "name": "BaseBdev2", 00:21:47.764 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:47.764 "is_configured": false, 00:21:47.764 "data_offset": 0, 00:21:47.764 "data_size": 0 00:21:47.764 }, 00:21:47.764 { 00:21:47.764 "name": "BaseBdev3", 00:21:47.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.764 "is_configured": false, 00:21:47.764 "data_offset": 0, 00:21:47.764 "data_size": 0 00:21:47.764 } 00:21:47.764 ] 00:21:47.764 }' 00:21:47.764 05:02:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.764 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.331 05:02:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:48.331 [2024-11-18 05:02:11.786049] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:48.331 [2024-11-18 05:02:11.786300] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:48.331 05:02:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:48.331 05:02:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:48.589 [2024-11-18 05:02:11.970115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.589 [2024-11-18 05:02:11.972033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.589 [2024-11-18 05:02:11.972080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.589 [2024-11-18 05:02:11.972092] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:48.589 [2024-11-18 05:02:11.972105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.589 05:02:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.848 05:02:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.848 "name": "Existed_Raid", 00:21:48.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.848 "strip_size_kb": 64, 00:21:48.848 "state": "configuring", 00:21:48.848 "raid_level": "raid5f", 00:21:48.848 "superblock": false, 00:21:48.848 "num_base_bdevs": 3, 00:21:48.848 "num_base_bdevs_discovered": 1, 00:21:48.848 "num_base_bdevs_operational": 3, 00:21:48.848 
"base_bdevs_list": [ 00:21:48.848 { 00:21:48.848 "name": "BaseBdev1", 00:21:48.848 "uuid": "4b5a753f-9276-4ff7-b9b6-b29d5a3bb639", 00:21:48.848 "is_configured": true, 00:21:48.848 "data_offset": 0, 00:21:48.848 "data_size": 65536 00:21:48.848 }, 00:21:48.848 { 00:21:48.848 "name": "BaseBdev2", 00:21:48.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.848 "is_configured": false, 00:21:48.848 "data_offset": 0, 00:21:48.848 "data_size": 0 00:21:48.848 }, 00:21:48.848 { 00:21:48.848 "name": "BaseBdev3", 00:21:48.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.848 "is_configured": false, 00:21:48.848 "data_offset": 0, 00:21:48.848 "data_size": 0 00:21:48.848 } 00:21:48.848 ] 00:21:48.848 }' 00:21:48.848 05:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.848 05:02:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 05:02:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:49.366 BaseBdev2 00:21:49.367 [2024-11-18 05:02:12.714507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.367 05:02:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:49.367 05:02:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:49.367 05:02:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:49.367 05:02:12 -- common/autotest_common.sh@899 -- # local i 00:21:49.367 05:02:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:49.367 05:02:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:49.367 05:02:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.625 05:02:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:49.626 [ 00:21:49.626 { 00:21:49.626 "name": "BaseBdev2", 00:21:49.626 "aliases": [ 00:21:49.626 "0e59c974-2871-4690-a0c0-a4ae767aa9a6" 00:21:49.626 ], 00:21:49.626 "product_name": "Malloc disk", 00:21:49.626 "block_size": 512, 00:21:49.626 "num_blocks": 65536, 00:21:49.626 "uuid": "0e59c974-2871-4690-a0c0-a4ae767aa9a6", 00:21:49.626 "assigned_rate_limits": { 00:21:49.626 "rw_ios_per_sec": 0, 00:21:49.626 "rw_mbytes_per_sec": 0, 00:21:49.626 "r_mbytes_per_sec": 0, 00:21:49.626 "w_mbytes_per_sec": 0 00:21:49.626 }, 00:21:49.626 "claimed": true, 00:21:49.626 "claim_type": "exclusive_write", 00:21:49.626 "zoned": false, 00:21:49.626 "supported_io_types": { 00:21:49.626 "read": true, 00:21:49.626 "write": true, 00:21:49.626 "unmap": true, 00:21:49.626 "write_zeroes": true, 00:21:49.626 "flush": true, 00:21:49.626 "reset": true, 00:21:49.626 "compare": false, 00:21:49.626 "compare_and_write": false, 00:21:49.626 "abort": true, 00:21:49.626 "nvme_admin": false, 00:21:49.626 "nvme_io": false 00:21:49.626 }, 00:21:49.626 "memory_domains": [ 00:21:49.626 { 00:21:49.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.626 "dma_device_type": 2 00:21:49.626 } 00:21:49.626 ], 00:21:49.626 "driver_specific": {} 00:21:49.626 } 00:21:49.626 ] 00:21:49.626 05:02:13 -- common/autotest_common.sh@905 -- # return 0 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:49.626 05:02:13 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.626 05:02:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.885 05:02:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.885 05:02:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.885 "name": "Existed_Raid", 00:21:49.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.885 "strip_size_kb": 64, 00:21:49.885 "state": "configuring", 00:21:49.885 "raid_level": "raid5f", 00:21:49.885 "superblock": false, 00:21:49.885 "num_base_bdevs": 3, 00:21:49.885 "num_base_bdevs_discovered": 2, 00:21:49.885 "num_base_bdevs_operational": 3, 00:21:49.885 "base_bdevs_list": [ 00:21:49.885 { 00:21:49.885 "name": "BaseBdev1", 00:21:49.885 "uuid": "4b5a753f-9276-4ff7-b9b6-b29d5a3bb639", 00:21:49.885 "is_configured": true, 00:21:49.885 "data_offset": 0, 00:21:49.885 "data_size": 65536 00:21:49.885 }, 00:21:49.885 { 00:21:49.885 "name": "BaseBdev2", 00:21:49.885 "uuid": "0e59c974-2871-4690-a0c0-a4ae767aa9a6", 00:21:49.885 "is_configured": true, 00:21:49.885 "data_offset": 0, 00:21:49.885 "data_size": 65536 00:21:49.885 }, 00:21:49.885 { 00:21:49.885 "name": "BaseBdev3", 00:21:49.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.885 "is_configured": false, 00:21:49.885 "data_offset": 0, 00:21:49.885 "data_size": 0 00:21:49.885 } 00:21:49.885 ] 00:21:49.885 }' 00:21:49.885 05:02:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.885 05:02:13 -- common/autotest_common.sh@10 -- # set +x 00:21:50.453 05:02:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:50.453 [2024-11-18 05:02:13.956045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:50.453 [2024-11-18 05:02:13.956097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:21:50.453 [2024-11-18 05:02:13.956111] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:50.453 [2024-11-18 05:02:13.956273] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:21:50.453 [2024-11-18 05:02:13.960887] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:21:50.453 BaseBdev3 00:21:50.453 [2024-11-18 05:02:13.961066] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:21:50.453 [2024-11-18 05:02:13.961387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.712 05:02:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:50.712 05:02:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:50.712 05:02:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:21:50.712 05:02:13 -- common/autotest_common.sh@899 -- # local i 00:21:50.712 05:02:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:50.712 05:02:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:50.712 05:02:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.971 05:02:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:50.971 [ 00:21:50.971 { 00:21:50.971 "name": "BaseBdev3", 00:21:50.971 "aliases": [ 00:21:50.971 "a2025d93-f396-422a-8674-37d0c954554d" 00:21:50.971 ], 00:21:50.971 "product_name": "Malloc disk", 00:21:50.971 "block_size": 512, 00:21:50.971 "num_blocks": 65536, 00:21:50.971 "uuid": "a2025d93-f396-422a-8674-37d0c954554d", 00:21:50.971 "assigned_rate_limits": { 00:21:50.971 "rw_ios_per_sec": 0, 00:21:50.971 "rw_mbytes_per_sec": 0, 00:21:50.971 "r_mbytes_per_sec": 0, 00:21:50.971 "w_mbytes_per_sec": 0 00:21:50.971 }, 00:21:50.971 "claimed": true, 00:21:50.971 "claim_type": "exclusive_write", 00:21:50.971 "zoned": false, 00:21:50.971 "supported_io_types": { 00:21:50.971 "read": true, 00:21:50.971 "write": true, 00:21:50.971 "unmap": true, 00:21:50.971 "write_zeroes": true, 00:21:50.971 "flush": true, 00:21:50.971 "reset": true, 00:21:50.971 "compare": false, 00:21:50.971 "compare_and_write": false, 00:21:50.971 "abort": true, 00:21:50.971 "nvme_admin": false, 00:21:50.971 "nvme_io": false 00:21:50.971 }, 00:21:50.971 "memory_domains": [ 00:21:50.971 { 00:21:50.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.971 "dma_device_type": 2 00:21:50.971 } 00:21:50.971 ], 00:21:50.971 "driver_specific": {} 00:21:50.971 } 00:21:50.971 ] 00:21:50.971 05:02:14 -- common/autotest_common.sh@905 -- # return 0 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.971 05:02:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.972 05:02:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.972 05:02:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.972 05:02:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.230 05:02:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.230 "name": "Existed_Raid", 00:21:51.230 "uuid": "89281919-8151-4fb1-9b8f-26b0fbddfebe", 00:21:51.230 "strip_size_kb": 64, 00:21:51.230 "state": "online", 00:21:51.230 "raid_level": "raid5f", 00:21:51.230 "superblock": false, 00:21:51.230 "num_base_bdevs": 3, 00:21:51.230 "num_base_bdevs_discovered": 3, 00:21:51.230 "num_base_bdevs_operational": 3, 00:21:51.230 "base_bdevs_list": [ 00:21:51.230 { 00:21:51.230 "name": 
"BaseBdev1", 00:21:51.230 "uuid": "4b5a753f-9276-4ff7-b9b6-b29d5a3bb639", 00:21:51.230 "is_configured": true, 00:21:51.231 "data_offset": 0, 00:21:51.231 "data_size": 65536 00:21:51.231 }, 00:21:51.231 { 00:21:51.231 "name": "BaseBdev2", 00:21:51.231 "uuid": "0e59c974-2871-4690-a0c0-a4ae767aa9a6", 00:21:51.231 "is_configured": true, 00:21:51.231 "data_offset": 0, 00:21:51.231 "data_size": 65536 00:21:51.231 }, 00:21:51.231 { 00:21:51.231 "name": "BaseBdev3", 00:21:51.231 "uuid": "a2025d93-f396-422a-8674-37d0c954554d", 00:21:51.231 "is_configured": true, 00:21:51.231 "data_offset": 0, 00:21:51.231 "data_size": 65536 00:21:51.231 } 00:21:51.231 ] 00:21:51.231 }' 00:21:51.231 05:02:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.231 05:02:14 -- common/autotest_common.sh@10 -- # set +x 00:21:51.489 05:02:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:51.748 [2024-11-18 05:02:15.234340] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:52.007 05:02:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.008 05:02:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.008 05:02:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:52.008 "name": "Existed_Raid", 00:21:52.008 "uuid": "89281919-8151-4fb1-9b8f-26b0fbddfebe", 00:21:52.008 "strip_size_kb": 64, 00:21:52.008 "state": "online", 00:21:52.008 "raid_level": "raid5f", 00:21:52.008 "superblock": false, 00:21:52.008 "num_base_bdevs": 3, 00:21:52.008 "num_base_bdevs_discovered": 2, 00:21:52.008 "num_base_bdevs_operational": 2, 00:21:52.008 "base_bdevs_list": [ 00:21:52.008 { 00:21:52.008 "name": null, 00:21:52.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.008 "is_configured": false, 00:21:52.008 "data_offset": 0, 00:21:52.008 "data_size": 65536 00:21:52.008 }, 00:21:52.008 { 00:21:52.008 "name": "BaseBdev2", 00:21:52.008 "uuid": "0e59c974-2871-4690-a0c0-a4ae767aa9a6", 00:21:52.008 "is_configured": true, 00:21:52.008 "data_offset": 0, 00:21:52.008 "data_size": 65536 00:21:52.008 }, 00:21:52.008 { 00:21:52.008 "name": "BaseBdev3", 00:21:52.008 "uuid": "a2025d93-f396-422a-8674-37d0c954554d", 00:21:52.008 "is_configured": true, 00:21:52.008 "data_offset": 0, 00:21:52.008 
"data_size": 65536 00:21:52.008 } 00:21:52.008 ] 00:21:52.008 }' 00:21:52.008 05:02:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:52.008 05:02:15 -- common/autotest_common.sh@10 -- # set +x 00:21:52.576 05:02:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:52.576 05:02:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:52.576 05:02:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.576 05:02:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:52.576 05:02:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:52.576 05:02:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:52.576 05:02:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:52.835 [2024-11-18 05:02:16.253953] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:52.835 [2024-11-18 05:02:16.254008] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.835 [2024-11-18 05:02:16.254072] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.835 05:02:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:52.835 05:02:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:52.835 05:02:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.835 05:02:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:53.094 05:02:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:53.094 05:02:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.094 05:02:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:53.353 [2024-11-18 05:02:16.732002] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:53.353 [2024-11-18 05:02:16.732057] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:21:53.353 05:02:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:53.353 05:02:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:53.353 05:02:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.353 05:02:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:53.612 05:02:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:53.612 05:02:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:53.612 05:02:17 -- bdev/bdev_raid.sh@287 -- # killprocess 82356 00:21:53.612 05:02:17 -- common/autotest_common.sh@936 -- # '[' -z 82356 ']' 00:21:53.612 05:02:17 -- common/autotest_common.sh@940 -- # kill -0 82356 00:21:53.612 05:02:17 -- common/autotest_common.sh@941 -- # uname 00:21:53.612 05:02:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.612 05:02:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82356 00:21:53.612 killing process with pid 82356 00:21:53.613 05:02:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:53.613 05:02:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:53.613 05:02:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82356' 00:21:53.613 05:02:17 -- common/autotest_common.sh@955 -- # kill 82356 00:21:53.613 [2024-11-18 05:02:17.084089] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:53.613 05:02:17 -- common/autotest_common.sh@960 -- # wait 82356 00:21:53.613 [2024-11-18 05:02:17.084184] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.550 ************************************ 00:21:54.550 END TEST raid5f_state_function_test 00:21:54.550 ************************************ 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:54.550 00:21:54.550 real 0m9.778s 00:21:54.550 user 0m16.319s 00:21:54.550 sys 0m1.416s 00:21:54.550 05:02:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:54.550 05:02:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:54.550 05:02:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:54.550 05:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:54.550 05:02:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.550 ************************************ 00:21:54.550 START TEST raid5f_state_function_test_sb 00:21:54.550 ************************************ 00:21:54.550 05:02:18 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:54.550 05:02:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=82685 00:21:54.809 Process raid pid: 82685 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82685' 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82685 /var/tmp/spdk-raid.sock 00:21:54.809 05:02:18 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:54.810 05:02:18 -- common/autotest_common.sh@829 -- # '[' -z 82685 ']' 00:21:54.810 05:02:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:54.810 05:02:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:54.810 05:02:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:54.810 05:02:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.810 05:02:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.810 [2024-11-18 05:02:18.134780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:54.810 [2024-11-18 05:02:18.134947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.810 [2024-11-18 05:02:18.302257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.069 [2024-11-18 05:02:18.466912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.328 [2024-11-18 05:02:18.611052] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.587 05:02:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.587 05:02:19 -- common/autotest_common.sh@862 -- # return 0 00:21:55.587 05:02:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:55.849 [2024-11-18 05:02:19.239610] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:55.849 [2024-11-18 05:02:19.239661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:55.849 [2024-11-18 05:02:19.239690] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:55.849 [2024-11-18 05:02:19.239703] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:55.849 [2024-11-18 05:02:19.239711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:55.849 [2024-11-18 05:02:19.239722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:55.849 05:02:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.115 05:02:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:56.115 "name": "Existed_Raid", 00:21:56.115 "uuid": "f577c565-7c4f-4b97-99aa-0f57d1b72058", 00:21:56.115 "strip_size_kb": 64, 00:21:56.115 "state": "configuring", 00:21:56.115 "raid_level": "raid5f", 00:21:56.115 "superblock": true, 00:21:56.115 "num_base_bdevs": 3, 00:21:56.115 "num_base_bdevs_discovered": 0, 00:21:56.115 "num_base_bdevs_operational": 3, 00:21:56.115 "base_bdevs_list": [ 00:21:56.115 { 00:21:56.115 "name": "BaseBdev1", 00:21:56.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.115 "is_configured": false, 00:21:56.115 "data_offset": 0, 00:21:56.115 "data_size": 0 00:21:56.115 }, 00:21:56.115 { 00:21:56.115 "name": "BaseBdev2", 00:21:56.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.115 "is_configured": false, 00:21:56.115 "data_offset": 0, 00:21:56.115 "data_size": 0 00:21:56.115 }, 00:21:56.115 { 00:21:56.115 "name": "BaseBdev3", 00:21:56.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.115 "is_configured": false, 00:21:56.115 "data_offset": 0, 00:21:56.115 "data_size": 0 00:21:56.115 } 00:21:56.115 ] 00:21:56.115 }' 00:21:56.115 05:02:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:56.115 05:02:19 -- common/autotest_common.sh@10 -- # set +x 00:21:56.374 05:02:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:56.633 [2024-11-18 05:02:19.991702] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:56.633 [2024-11-18 05:02:19.991746] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:21:56.633 05:02:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:56.891 [2024-11-18 05:02:20.243849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:56.891 [2024-11-18 05:02:20.243917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:56.891 [2024-11-18 05:02:20.243929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:56.891 [2024-11-18 05:02:20.243944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:56.891 [2024-11-18 05:02:20.243952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:56.891 [2024-11-18 05:02:20.243963] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:56.891 05:02:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:57.151 [2024-11-18 05:02:20.516383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.151 BaseBdev1 00:21:57.151 05:02:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:57.151 05:02:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:57.151 05:02:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:57.151 05:02:20 -- common/autotest_common.sh@899 -- # local i 00:21:57.151 05:02:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:57.151 05:02:20 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:57.151 05:02:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.410 05:02:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:57.669 [ 00:21:57.669 { 00:21:57.669 "name": "BaseBdev1", 00:21:57.669 "aliases": [ 00:21:57.669 "17e67f7a-0f18-450f-852f-24f645f5c53e" 00:21:57.669 ], 00:21:57.669 "product_name": "Malloc disk", 00:21:57.669 "block_size": 512, 00:21:57.669 "num_blocks": 65536, 00:21:57.669 "uuid": "17e67f7a-0f18-450f-852f-24f645f5c53e", 00:21:57.669 "assigned_rate_limits": { 00:21:57.669 "rw_ios_per_sec": 0, 00:21:57.669 "rw_mbytes_per_sec": 0, 00:21:57.669 "r_mbytes_per_sec": 0, 00:21:57.669 "w_mbytes_per_sec": 0 00:21:57.669 }, 00:21:57.669 "claimed": true, 00:21:57.669 "claim_type": "exclusive_write", 00:21:57.669 "zoned": false, 00:21:57.669 "supported_io_types": { 00:21:57.669 "read": true, 00:21:57.669 "write": true, 00:21:57.669 "unmap": true, 00:21:57.669 "write_zeroes": true, 00:21:57.669 "flush": true, 00:21:57.669 "reset": true, 00:21:57.669 "compare": false, 00:21:57.669 "compare_and_write": false, 00:21:57.669 "abort": true, 00:21:57.669 "nvme_admin": false, 00:21:57.669 "nvme_io": false 00:21:57.669 }, 00:21:57.669 "memory_domains": [ 00:21:57.669 { 00:21:57.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.669 "dma_device_type": 2 00:21:57.669 } 00:21:57.669 ], 00:21:57.669 "driver_specific": {} 00:21:57.669 } 00:21:57.669 ] 00:21:57.669 05:02:20 -- common/autotest_common.sh@905 -- # return 0 00:21:57.669 05:02:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:57.669 05:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.670 05:02:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.670 05:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:57.670 "name": "Existed_Raid", 00:21:57.670 "uuid": "c547de8f-acd3-4b23-9585-0f7bb088faee", 00:21:57.670 "strip_size_kb": 64, 00:21:57.670 "state": "configuring", 00:21:57.670 "raid_level": "raid5f", 00:21:57.670 "superblock": true, 00:21:57.670 "num_base_bdevs": 3, 00:21:57.670 "num_base_bdevs_discovered": 1, 00:21:57.670 "num_base_bdevs_operational": 3, 00:21:57.670 "base_bdevs_list": [ 00:21:57.670 { 00:21:57.670 "name": "BaseBdev1", 00:21:57.670 "uuid": "17e67f7a-0f18-450f-852f-24f645f5c53e", 00:21:57.670 "is_configured": true, 00:21:57.670 "data_offset": 2048, 00:21:57.670 "data_size": 63488 00:21:57.670 }, 00:21:57.670 { 00:21:57.670 "name": "BaseBdev2", 00:21:57.670 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:57.670 "is_configured": false, 00:21:57.670 "data_offset": 0, 00:21:57.670 "data_size": 0 00:21:57.670 }, 00:21:57.670 { 00:21:57.670 "name": "BaseBdev3", 00:21:57.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.670 "is_configured": false, 00:21:57.670 "data_offset": 0, 00:21:57.670 "data_size": 0 00:21:57.670 } 00:21:57.670 ] 00:21:57.670 }' 00:21:57.670 05:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.670 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:21:58.238 05:02:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:58.238 [2024-11-18 05:02:21.736747] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:58.239 [2024-11-18 05:02:21.736799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:58.239 05:02:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:58.239 05:02:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:58.807 05:02:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:58.807 BaseBdev1 00:21:58.807 05:02:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:58.807 05:02:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:58.807 05:02:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:58.807 05:02:22 -- common/autotest_common.sh@899 -- # local i 00:21:58.807 05:02:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:58.807 05:02:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:58.807 05:02:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.066 05:02:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:59.326 [ 00:21:59.326 { 00:21:59.326 "name": "BaseBdev1", 00:21:59.326 "aliases": [ 00:21:59.326 "7aca5b88-c65e-4490-ba64-2dc23aa30530" 00:21:59.326 ], 00:21:59.326 "product_name": "Malloc disk", 00:21:59.326 "block_size": 512, 00:21:59.326 "num_blocks": 65536, 00:21:59.326 "uuid": "7aca5b88-c65e-4490-ba64-2dc23aa30530", 00:21:59.326 "assigned_rate_limits": { 00:21:59.326 "rw_ios_per_sec": 0, 00:21:59.326 "rw_mbytes_per_sec": 0, 00:21:59.326 "r_mbytes_per_sec": 0, 00:21:59.326 "w_mbytes_per_sec": 0 00:21:59.326 }, 00:21:59.326 "claimed": false, 00:21:59.326 "zoned": false, 00:21:59.326 "supported_io_types": { 00:21:59.326 "read": true, 00:21:59.326 "write": true, 00:21:59.326 "unmap": true, 00:21:59.326 "write_zeroes": true, 00:21:59.326 "flush": true, 00:21:59.326 "reset": true, 00:21:59.326 "compare": false, 00:21:59.326 "compare_and_write": false, 00:21:59.326 "abort": true, 00:21:59.326 "nvme_admin": false, 00:21:59.326 "nvme_io": false 00:21:59.326 }, 00:21:59.326 "memory_domains": [ 00:21:59.326 { 00:21:59.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.326 "dma_device_type": 2 00:21:59.326 } 00:21:59.326 ], 00:21:59.326 "driver_specific": {} 00:21:59.326 } 00:21:59.326 ] 00:21:59.326 05:02:22 -- common/autotest_common.sh@905 -- # return 0 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:59.326 [2024-11-18 05:02:22.785643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.326 [2024-11-18 05:02:22.787469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.326 [2024-11-18 05:02:22.787544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.326 [2024-11-18 05:02:22.787557] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.326 [2024-11-18 05:02:22.787571] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.326 05:02:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.585 05:02:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.585 "name": "Existed_Raid", 00:21:59.585 "uuid": "79efdb40-c5d8-431b-a69d-4af5ec03df15", 00:21:59.585 "strip_size_kb": 64, 00:21:59.585 "state": "configuring", 00:21:59.585 "raid_level": "raid5f", 00:21:59.585 "superblock": true, 00:21:59.585 "num_base_bdevs": 3, 00:21:59.585 "num_base_bdevs_discovered": 1, 00:21:59.585 "num_base_bdevs_operational": 3, 00:21:59.585 "base_bdevs_list": [ 00:21:59.585 { 00:21:59.585 "name": "BaseBdev1", 00:21:59.585 "uuid": "7aca5b88-c65e-4490-ba64-2dc23aa30530", 00:21:59.585 "is_configured": true, 00:21:59.585 "data_offset": 2048, 00:21:59.585 "data_size": 63488 00:21:59.585 }, 00:21:59.585 { 00:21:59.585 "name": "BaseBdev2", 00:21:59.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.585 "is_configured": false, 00:21:59.585 "data_offset": 0, 00:21:59.585 "data_size": 0 00:21:59.585 }, 00:21:59.585 { 00:21:59.585 "name": "BaseBdev3", 00:21:59.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.585 "is_configured": false, 00:21:59.585 "data_offset": 0, 00:21:59.585 "data_size": 0 00:21:59.585 } 00:21:59.585 ] 00:21:59.585 }' 00:21:59.585 05:02:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.585 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:21:59.844 05:02:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:00.104 [2024-11-18 05:02:23.546037] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.104 BaseBdev2 
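Note: the state verification that follows each base-bdev addition in this test reduces to one RPC call plus a jq filter over its JSON output. A minimal standalone sketch, assuming the same rpc.py path and /var/tmp/spdk-raid.sock socket as the trace above (the rpc shell variable is shorthand introduced here for readability, not part of the harness):

    # Dump all raid bdevs over the RPC socket, then isolate the array under test.
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # At this point in the trace the expected counters in the output are
    # "num_base_bdevs_discovered": 2 and "num_base_bdevs_operational": 3,
    # i.e. BaseBdev1 and BaseBdev2 are claimed while BaseBdev3 does not exist yet.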
00:22:00.104 05:02:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:00.104 05:02:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:00.104 05:02:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:00.104 05:02:23 -- common/autotest_common.sh@899 -- # local i 00:22:00.104 05:02:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:00.104 05:02:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:00.104 05:02:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.364 05:02:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:00.623 [ 00:22:00.623 { 00:22:00.623 "name": "BaseBdev2", 00:22:00.623 "aliases": [ 00:22:00.623 "b35a92c1-9abb-45ee-abc9-a97f223c44f5" 00:22:00.623 ], 00:22:00.623 "product_name": "Malloc disk", 00:22:00.623 "block_size": 512, 00:22:00.623 "num_blocks": 65536, 00:22:00.623 "uuid": "b35a92c1-9abb-45ee-abc9-a97f223c44f5", 00:22:00.623 "assigned_rate_limits": { 00:22:00.623 "rw_ios_per_sec": 0, 00:22:00.623 "rw_mbytes_per_sec": 0, 00:22:00.623 "r_mbytes_per_sec": 0, 00:22:00.623 "w_mbytes_per_sec": 0 00:22:00.623 }, 00:22:00.623 "claimed": true, 00:22:00.623 "claim_type": "exclusive_write", 00:22:00.623 "zoned": false, 00:22:00.623 "supported_io_types": { 00:22:00.623 "read": true, 00:22:00.623 "write": true, 00:22:00.623 "unmap": true, 00:22:00.623 "write_zeroes": true, 00:22:00.623 "flush": true, 00:22:00.623 "reset": true, 00:22:00.623 "compare": false, 00:22:00.623 "compare_and_write": false, 00:22:00.623 "abort": true, 00:22:00.623 "nvme_admin": false, 00:22:00.623 "nvme_io": false 00:22:00.623 }, 00:22:00.623 "memory_domains": [ 00:22:00.623 { 00:22:00.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.623 "dma_device_type": 2 00:22:00.623 } 00:22:00.623 ], 00:22:00.623 "driver_specific": {} 00:22:00.623 } 00:22:00.623 ] 00:22:00.623 05:02:24 -- common/autotest_common.sh@905 -- # return 0 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.623 05:02:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.882 05:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:00.882 "name": "Existed_Raid", 00:22:00.882 "uuid": "79efdb40-c5d8-431b-a69d-4af5ec03df15", 00:22:00.882 "strip_size_kb": 64, 00:22:00.882 "state": "configuring", 00:22:00.882 
"raid_level": "raid5f", 00:22:00.882 "superblock": true, 00:22:00.882 "num_base_bdevs": 3, 00:22:00.882 "num_base_bdevs_discovered": 2, 00:22:00.882 "num_base_bdevs_operational": 3, 00:22:00.882 "base_bdevs_list": [ 00:22:00.882 { 00:22:00.882 "name": "BaseBdev1", 00:22:00.882 "uuid": "7aca5b88-c65e-4490-ba64-2dc23aa30530", 00:22:00.882 "is_configured": true, 00:22:00.882 "data_offset": 2048, 00:22:00.882 "data_size": 63488 00:22:00.882 }, 00:22:00.882 { 00:22:00.882 "name": "BaseBdev2", 00:22:00.882 "uuid": "b35a92c1-9abb-45ee-abc9-a97f223c44f5", 00:22:00.882 "is_configured": true, 00:22:00.883 "data_offset": 2048, 00:22:00.883 "data_size": 63488 00:22:00.883 }, 00:22:00.883 { 00:22:00.883 "name": "BaseBdev3", 00:22:00.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.883 "is_configured": false, 00:22:00.883 "data_offset": 0, 00:22:00.883 "data_size": 0 00:22:00.883 } 00:22:00.883 ] 00:22:00.883 }' 00:22:00.883 05:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:00.883 05:02:24 -- common/autotest_common.sh@10 -- # set +x 00:22:01.141 05:02:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:01.401 [2024-11-18 05:02:24.698604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.401 [2024-11-18 05:02:24.698885] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:22:01.401 [2024-11-18 05:02:24.698939] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:01.401 [2024-11-18 05:02:24.699043] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:01.401 BaseBdev3 00:22:01.401 [2024-11-18 05:02:24.703869] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:22:01.401 [2024-11-18 05:02:24.703895] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:22:01.401 [2024-11-18 05:02:24.704084] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.401 05:02:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:01.401 05:02:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:01.401 05:02:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:01.401 05:02:24 -- common/autotest_common.sh@899 -- # local i 00:22:01.401 05:02:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:01.401 05:02:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:01.401 05:02:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.401 05:02:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:01.660 [ 00:22:01.660 { 00:22:01.660 "name": "BaseBdev3", 00:22:01.660 "aliases": [ 00:22:01.660 "c9015a8c-46b8-4df3-adab-6e0312b581ec" 00:22:01.660 ], 00:22:01.660 "product_name": "Malloc disk", 00:22:01.660 "block_size": 512, 00:22:01.660 "num_blocks": 65536, 00:22:01.660 "uuid": "c9015a8c-46b8-4df3-adab-6e0312b581ec", 00:22:01.660 "assigned_rate_limits": { 00:22:01.660 "rw_ios_per_sec": 0, 00:22:01.660 "rw_mbytes_per_sec": 0, 00:22:01.660 "r_mbytes_per_sec": 0, 00:22:01.660 "w_mbytes_per_sec": 0 00:22:01.660 }, 00:22:01.660 "claimed": true, 00:22:01.660 "claim_type": "exclusive_write", 00:22:01.660 "zoned": false, 
00:22:01.660 "supported_io_types": { 00:22:01.660 "read": true, 00:22:01.660 "write": true, 00:22:01.660 "unmap": true, 00:22:01.660 "write_zeroes": true, 00:22:01.660 "flush": true, 00:22:01.660 "reset": true, 00:22:01.660 "compare": false, 00:22:01.660 "compare_and_write": false, 00:22:01.660 "abort": true, 00:22:01.660 "nvme_admin": false, 00:22:01.660 "nvme_io": false 00:22:01.660 }, 00:22:01.660 "memory_domains": [ 00:22:01.660 { 00:22:01.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.660 "dma_device_type": 2 00:22:01.660 } 00:22:01.660 ], 00:22:01.660 "driver_specific": {} 00:22:01.660 } 00:22:01.660 ] 00:22:01.660 05:02:25 -- common/autotest_common.sh@905 -- # return 0 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.660 05:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.919 05:02:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.919 "name": "Existed_Raid", 00:22:01.919 "uuid": "79efdb40-c5d8-431b-a69d-4af5ec03df15", 00:22:01.919 "strip_size_kb": 64, 00:22:01.919 "state": "online", 00:22:01.919 "raid_level": "raid5f", 00:22:01.919 "superblock": true, 00:22:01.919 "num_base_bdevs": 3, 00:22:01.919 "num_base_bdevs_discovered": 3, 00:22:01.919 "num_base_bdevs_operational": 3, 00:22:01.919 "base_bdevs_list": [ 00:22:01.919 { 00:22:01.919 "name": "BaseBdev1", 00:22:01.919 "uuid": "7aca5b88-c65e-4490-ba64-2dc23aa30530", 00:22:01.919 "is_configured": true, 00:22:01.919 "data_offset": 2048, 00:22:01.919 "data_size": 63488 00:22:01.919 }, 00:22:01.919 { 00:22:01.919 "name": "BaseBdev2", 00:22:01.919 "uuid": "b35a92c1-9abb-45ee-abc9-a97f223c44f5", 00:22:01.919 "is_configured": true, 00:22:01.919 "data_offset": 2048, 00:22:01.919 "data_size": 63488 00:22:01.919 }, 00:22:01.919 { 00:22:01.919 "name": "BaseBdev3", 00:22:01.919 "uuid": "c9015a8c-46b8-4df3-adab-6e0312b581ec", 00:22:01.919 "is_configured": true, 00:22:01.919 "data_offset": 2048, 00:22:01.919 "data_size": 63488 00:22:01.919 } 00:22:01.919 ] 00:22:01.919 }' 00:22:01.919 05:02:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.919 05:02:25 -- common/autotest_common.sh@10 -- # set +x 00:22:02.178 05:02:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:02.178 [2024-11-18 05:02:25.688879] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:02.437 05:02:25 -- 
bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.437 05:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.695 05:02:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.695 "name": "Existed_Raid", 00:22:02.695 "uuid": "79efdb40-c5d8-431b-a69d-4af5ec03df15", 00:22:02.695 "strip_size_kb": 64, 00:22:02.695 "state": "online", 00:22:02.695 "raid_level": "raid5f", 00:22:02.695 "superblock": true, 00:22:02.695 "num_base_bdevs": 3, 00:22:02.695 "num_base_bdevs_discovered": 2, 00:22:02.695 "num_base_bdevs_operational": 2, 00:22:02.695 "base_bdevs_list": [ 00:22:02.695 { 00:22:02.695 "name": null, 00:22:02.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.695 "is_configured": false, 00:22:02.695 "data_offset": 2048, 00:22:02.695 "data_size": 63488 00:22:02.695 }, 00:22:02.695 { 00:22:02.695 "name": "BaseBdev2", 00:22:02.695 "uuid": "b35a92c1-9abb-45ee-abc9-a97f223c44f5", 00:22:02.695 "is_configured": true, 00:22:02.695 "data_offset": 2048, 00:22:02.695 "data_size": 63488 00:22:02.695 }, 00:22:02.695 { 00:22:02.695 "name": "BaseBdev3", 00:22:02.695 "uuid": "c9015a8c-46b8-4df3-adab-6e0312b581ec", 00:22:02.695 "is_configured": true, 00:22:02.695 "data_offset": 2048, 00:22:02.696 "data_size": 63488 00:22:02.696 } 00:22:02.696 ] 00:22:02.696 }' 00:22:02.696 05:02:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.696 05:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:02.954 05:02:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:02.954 05:02:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:02.954 05:02:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.954 05:02:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:03.212 05:02:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:03.212 05:02:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:03.212 05:02:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:03.470 [2024-11-18 05:02:26.840988] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:03.470 [2024-11-18 05:02:26.841023] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:03.470 [2024-11-18 
05:02:26.841087] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.470 05:02:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:03.470 05:02:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:03.470 05:02:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.471 05:02:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:03.729 05:02:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:03.729 05:02:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:03.729 05:02:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:03.988 [2024-11-18 05:02:27.331253] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:03.988 [2024-11-18 05:02:27.331320] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:22:03.988 05:02:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:03.988 05:02:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:03.988 05:02:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.988 05:02:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:04.247 05:02:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:04.247 05:02:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:04.247 05:02:27 -- bdev/bdev_raid.sh@287 -- # killprocess 82685 00:22:04.247 05:02:27 -- common/autotest_common.sh@936 -- # '[' -z 82685 ']' 00:22:04.247 05:02:27 -- common/autotest_common.sh@940 -- # kill -0 82685 00:22:04.247 05:02:27 -- common/autotest_common.sh@941 -- # uname 00:22:04.247 05:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.247 05:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82685 00:22:04.247 killing process with pid 82685 00:22:04.247 05:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:04.247 05:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:04.247 05:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82685' 00:22:04.247 05:02:27 -- common/autotest_common.sh@955 -- # kill 82685 00:22:04.247 [2024-11-18 05:02:27.625479] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:04.247 05:02:27 -- common/autotest_common.sh@960 -- # wait 82685 00:22:04.247 [2024-11-18 05:02:27.625639] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:05.183 00:22:05.183 real 0m10.498s 00:22:05.183 user 0m17.551s 00:22:05.183 sys 0m1.520s 00:22:05.183 05:02:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:05.183 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:05.183 ************************************ 00:22:05.183 END TEST raid5f_state_function_test_sb 00:22:05.183 ************************************ 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:05.183 05:02:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:05.183 05:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:05.183 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:05.183 ************************************ 00:22:05.183 START TEST raid5f_superblock_test 00:22:05.183 
************************************ 00:22:05.183 05:02:28 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=83034 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 83034 /var/tmp/spdk-raid.sock 00:22:05.183 05:02:28 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:05.183 05:02:28 -- common/autotest_common.sh@829 -- # '[' -z 83034 ']' 00:22:05.183 05:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:05.183 05:02:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:05.183 05:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:05.183 05:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.183 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:05.183 [2024-11-18 05:02:28.683532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
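Note: the raid5f_superblock_test body below builds a three-layer stack before creating the array: one malloc bdev per member, a passthru bdev (pt1..pt3) with a fixed UUID layered on top of each, and finally a raid5f bdev created with -s so a superblock is written to every base bdev. A minimal sketch of the equivalent manual RPC sequence, assuming the same rpc.py path and socket as the trace (the loop and the rpc shorthand are illustrative, not taken from the harness):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for i in 1 2 3; do
      # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the dumps below)
      $rpc bdev_malloc_create 32 512 -b malloc$i
      # wrap it in a passthru bdev with a deterministic UUID
      $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # -z 64 selects a 64 KiB strip size; -s writes the on-disk superblock,
    # which is why data_offset becomes 2048 and data_size 63488 in the dumps below
    $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s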
00:22:05.183 [2024-11-18 05:02:28.683695] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83034 ] 00:22:05.442 [2024-11-18 05:02:28.855000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.701 [2024-11-18 05:02:29.063537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.701 [2024-11-18 05:02:29.203630] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.268 05:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.268 05:02:29 -- common/autotest_common.sh@862 -- # return 0 00:22:06.268 05:02:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:06.268 05:02:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:06.268 05:02:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:06.268 05:02:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:06.268 05:02:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:06.269 05:02:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:06.269 05:02:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:06.269 05:02:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:06.269 05:02:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:06.527 malloc1 00:22:06.527 05:02:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:06.527 [2024-11-18 05:02:30.033503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:06.527 [2024-11-18 05:02:30.033622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.527 [2024-11-18 05:02:30.033661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:06.527 [2024-11-18 05:02:30.033676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.527 [2024-11-18 05:02:30.036033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.527 [2024-11-18 05:02:30.036071] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:06.527 pt1 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:06.788 malloc2 00:22:06.788 05:02:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:22:07.047 [2024-11-18 05:02:30.471405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:07.047 [2024-11-18 05:02:30.471477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.047 [2024-11-18 05:02:30.471506] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:07.047 [2024-11-18 05:02:30.471520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.047 [2024-11-18 05:02:30.473582] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.047 [2024-11-18 05:02:30.473618] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:07.047 pt2 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:07.047 05:02:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:07.307 malloc3 00:22:07.307 05:02:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:07.566 [2024-11-18 05:02:30.870056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:07.566 [2024-11-18 05:02:30.870147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.566 [2024-11-18 05:02:30.870178] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:07.566 [2024-11-18 05:02:30.870192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.566 [2024-11-18 05:02:30.872342] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.566 [2024-11-18 05:02:30.872395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:07.566 pt3 00:22:07.566 05:02:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:07.566 05:02:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:07.566 05:02:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:07.566 [2024-11-18 05:02:31.054101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:07.566 [2024-11-18 05:02:31.055901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:07.566 [2024-11-18 05:02:31.055994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:07.566 [2024-11-18 05:02:31.056228] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:22:07.566 [2024-11-18 05:02:31.056289] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:07.566 [2024-11-18 05:02:31.056400] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:22:07.566 [2024-11-18 05:02:31.060692] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:22:07.566 [2024-11-18 05:02:31.060720] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:22:07.566 [2024-11-18 05:02:31.060949] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.566 05:02:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.826 05:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.826 "name": "raid_bdev1", 00:22:07.826 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:07.826 "strip_size_kb": 64, 00:22:07.826 "state": "online", 00:22:07.826 "raid_level": "raid5f", 00:22:07.826 "superblock": true, 00:22:07.826 "num_base_bdevs": 3, 00:22:07.826 "num_base_bdevs_discovered": 3, 00:22:07.826 "num_base_bdevs_operational": 3, 00:22:07.826 "base_bdevs_list": [ 00:22:07.826 { 00:22:07.826 "name": "pt1", 00:22:07.826 "uuid": "98f6dcfb-78d8-533a-bfdd-bd6c187fb41d", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 2048, 00:22:07.826 "data_size": 63488 00:22:07.826 }, 00:22:07.826 { 00:22:07.826 "name": "pt2", 00:22:07.826 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 2048, 00:22:07.826 "data_size": 63488 00:22:07.826 }, 00:22:07.826 { 00:22:07.826 "name": "pt3", 00:22:07.826 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:07.826 "is_configured": true, 00:22:07.826 "data_offset": 2048, 00:22:07.826 "data_size": 63488 00:22:07.826 } 00:22:07.826 ] 00:22:07.826 }' 00:22:07.826 05:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.826 05:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:08.085 05:02:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:08.085 05:02:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:08.344 [2024-11-18 05:02:31.729560] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.344 05:02:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8 00:22:08.344 05:02:31 -- bdev/bdev_raid.sh@380 -- # '[' -z f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8 ']' 00:22:08.344 05:02:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:08.604 [2024-11-18 05:02:31.921455] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:08.604 [2024-11-18 05:02:31.921632] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.604 [2024-11-18 05:02:31.921733] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.604 [2024-11-18 05:02:31.921819] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.604 [2024-11-18 05:02:31.921837] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:22:08.604 05:02:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:08.604 05:02:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.863 05:02:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:08.864 05:02:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:08.864 05:02:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:08.864 05:02:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:08.864 05:02:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:08.864 05:02:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:09.123 05:02:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:09.123 05:02:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:09.387 05:02:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:09.387 05:02:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:09.669 05:02:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:09.669 05:02:32 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:09.669 05:02:32 -- common/autotest_common.sh@650 -- # local es=0 00:22:09.669 05:02:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:09.669 05:02:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.669 05:02:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.669 05:02:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.669 05:02:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.669 05:02:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.669 05:02:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.669 05:02:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.669 05:02:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:09.669 05:02:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:09.669 [2024-11-18 05:02:33.097697] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:09.669 [2024-11-18 05:02:33.099672] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:09.669 [2024-11-18 05:02:33.099720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:09.669 [2024-11-18 05:02:33.099775] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:09.669 [2024-11-18 05:02:33.099845] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:09.669 [2024-11-18 05:02:33.099874] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:09.669 [2024-11-18 05:02:33.099893] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:09.669 [2024-11-18 05:02:33.099907] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:22:09.669 request: 00:22:09.669 { 00:22:09.669 "name": "raid_bdev1", 00:22:09.669 "raid_level": "raid5f", 00:22:09.669 "base_bdevs": [ 00:22:09.669 "malloc1", 00:22:09.669 "malloc2", 00:22:09.669 "malloc3" 00:22:09.669 ], 00:22:09.669 "superblock": false, 00:22:09.669 "strip_size_kb": 64, 00:22:09.669 "method": "bdev_raid_create", 00:22:09.669 "req_id": 1 00:22:09.669 } 00:22:09.669 Got JSON-RPC error response 00:22:09.669 response: 00:22:09.669 { 00:22:09.669 "code": -17, 00:22:09.670 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:09.670 } 00:22:09.670 05:02:33 -- common/autotest_common.sh@653 -- # es=1 00:22:09.670 05:02:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.670 05:02:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.670 05:02:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.670 05:02:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.670 05:02:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:09.956 05:02:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:09.956 05:02:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:09.956 05:02:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:10.220 [2024-11-18 05:02:33.585764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:10.220 [2024-11-18 05:02:33.586014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.220 [2024-11-18 05:02:33.586069] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:22:10.220 [2024-11-18 05:02:33.586087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.220 [2024-11-18 05:02:33.588473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.220 [2024-11-18 05:02:33.588515] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:10.220 [2024-11-18 05:02:33.588606] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:10.220 [2024-11-18 05:02:33.588662] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:10.220 pt1 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.220 05:02:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.479 05:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.479 "name": "raid_bdev1", 00:22:10.479 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:10.479 "strip_size_kb": 64, 00:22:10.479 "state": "configuring", 00:22:10.479 "raid_level": "raid5f", 00:22:10.479 "superblock": true, 00:22:10.479 "num_base_bdevs": 3, 00:22:10.479 "num_base_bdevs_discovered": 1, 00:22:10.479 "num_base_bdevs_operational": 3, 00:22:10.479 "base_bdevs_list": [ 00:22:10.479 { 00:22:10.479 "name": "pt1", 00:22:10.479 "uuid": "98f6dcfb-78d8-533a-bfdd-bd6c187fb41d", 00:22:10.479 "is_configured": true, 00:22:10.479 "data_offset": 2048, 00:22:10.479 "data_size": 63488 00:22:10.479 }, 00:22:10.479 { 00:22:10.479 "name": null, 00:22:10.479 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:10.479 "is_configured": false, 00:22:10.479 "data_offset": 2048, 00:22:10.479 "data_size": 63488 00:22:10.479 }, 00:22:10.479 { 00:22:10.479 "name": null, 00:22:10.479 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:10.479 "is_configured": false, 00:22:10.479 "data_offset": 2048, 00:22:10.479 "data_size": 63488 00:22:10.479 } 00:22:10.479 ] 00:22:10.479 }' 00:22:10.479 05:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.479 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:22:10.738 05:02:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:22:10.738 05:02:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:10.997 [2024-11-18 05:02:34.285930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:10.997 [2024-11-18 05:02:34.286037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.997 [2024-11-18 05:02:34.286066] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:22:10.997 [2024-11-18 05:02:34.286082] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.997 [2024-11-18 05:02:34.286649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.997 [2024-11-18 05:02:34.286683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:10.997 [2024-11-18 05:02:34.286776] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:10.997 [2024-11-18 05:02:34.286821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:10.997 pt2 00:22:10.997 05:02:34 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:10.997 [2024-11-18 05:02:34.481994] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.997 05:02:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.255 05:02:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.255 "name": "raid_bdev1", 00:22:11.255 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:11.255 "strip_size_kb": 64, 00:22:11.255 "state": "configuring", 00:22:11.255 "raid_level": "raid5f", 00:22:11.255 "superblock": true, 00:22:11.255 "num_base_bdevs": 3, 00:22:11.255 "num_base_bdevs_discovered": 1, 00:22:11.255 "num_base_bdevs_operational": 3, 00:22:11.255 "base_bdevs_list": [ 00:22:11.255 { 00:22:11.255 "name": "pt1", 00:22:11.256 "uuid": "98f6dcfb-78d8-533a-bfdd-bd6c187fb41d", 00:22:11.256 "is_configured": true, 00:22:11.256 "data_offset": 2048, 00:22:11.256 "data_size": 63488 00:22:11.256 }, 00:22:11.256 { 00:22:11.256 "name": null, 00:22:11.256 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:11.256 "is_configured": false, 00:22:11.256 "data_offset": 2048, 00:22:11.256 "data_size": 63488 00:22:11.256 }, 00:22:11.256 { 00:22:11.256 "name": null, 00:22:11.256 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:11.256 "is_configured": false, 00:22:11.256 "data_offset": 2048, 00:22:11.256 "data_size": 63488 00:22:11.256 } 00:22:11.256 ] 00:22:11.256 }' 00:22:11.256 05:02:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.256 05:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:11.515 05:02:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:11.515 05:02:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:11.515 05:02:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:11.774 [2024-11-18 05:02:35.158146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:11.774 [2024-11-18 05:02:35.158454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.774 [2024-11-18 05:02:35.158524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:22:11.774 [2024-11-18 05:02:35.158645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.774 [2024-11-18 05:02:35.159127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.774 [2024-11-18 05:02:35.159305] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:11.774 [2024-11-18 05:02:35.159507] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:11.774 [2024-11-18 05:02:35.159666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:11.774 pt2 00:22:11.774 05:02:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:11.774 05:02:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:11.774 05:02:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:12.034 [2024-11-18 05:02:35.398200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:12.034 [2024-11-18 05:02:35.398310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.034 [2024-11-18 05:02:35.398337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:22:12.034 [2024-11-18 05:02:35.398349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.034 [2024-11-18 05:02:35.398808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.034 [2024-11-18 05:02:35.398831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:12.034 [2024-11-18 05:02:35.398920] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:12.034 [2024-11-18 05:02:35.398944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:12.034 [2024-11-18 05:02:35.399084] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:22:12.034 [2024-11-18 05:02:35.399098] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:12.034 [2024-11-18 05:02:35.399197] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:12.034 [2024-11-18 05:02:35.403442] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:22:12.034 [2024-11-18 05:02:35.403485] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:22:12.034 [2024-11-18 05:02:35.403677] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.034 pt3 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.034 05:02:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.034 
05:02:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.293 05:02:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.293 "name": "raid_bdev1", 00:22:12.293 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:12.293 "strip_size_kb": 64, 00:22:12.293 "state": "online", 00:22:12.293 "raid_level": "raid5f", 00:22:12.293 "superblock": true, 00:22:12.293 "num_base_bdevs": 3, 00:22:12.293 "num_base_bdevs_discovered": 3, 00:22:12.293 "num_base_bdevs_operational": 3, 00:22:12.293 "base_bdevs_list": [ 00:22:12.293 { 00:22:12.293 "name": "pt1", 00:22:12.293 "uuid": "98f6dcfb-78d8-533a-bfdd-bd6c187fb41d", 00:22:12.293 "is_configured": true, 00:22:12.293 "data_offset": 2048, 00:22:12.293 "data_size": 63488 00:22:12.293 }, 00:22:12.293 { 00:22:12.293 "name": "pt2", 00:22:12.293 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:12.293 "is_configured": true, 00:22:12.293 "data_offset": 2048, 00:22:12.293 "data_size": 63488 00:22:12.293 }, 00:22:12.293 { 00:22:12.293 "name": "pt3", 00:22:12.293 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:12.293 "is_configured": true, 00:22:12.293 "data_offset": 2048, 00:22:12.293 "data_size": 63488 00:22:12.293 } 00:22:12.293 ] 00:22:12.293 }' 00:22:12.293 05:02:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.293 05:02:35 -- common/autotest_common.sh@10 -- # set +x 00:22:12.553 05:02:35 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:12.553 05:02:35 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:12.812 [2024-11-18 05:02:36.104398] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@430 -- # '[' f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8 '!=' f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8 ']' 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:12.812 [2024-11-18 05:02:36.292273] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.812 05:02:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.071 05:02:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.071 "name": "raid_bdev1", 00:22:13.071 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:13.071 "strip_size_kb": 64, 
00:22:13.071 "state": "online", 00:22:13.071 "raid_level": "raid5f", 00:22:13.071 "superblock": true, 00:22:13.071 "num_base_bdevs": 3, 00:22:13.071 "num_base_bdevs_discovered": 2, 00:22:13.071 "num_base_bdevs_operational": 2, 00:22:13.071 "base_bdevs_list": [ 00:22:13.071 { 00:22:13.071 "name": null, 00:22:13.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.071 "is_configured": false, 00:22:13.071 "data_offset": 2048, 00:22:13.071 "data_size": 63488 00:22:13.071 }, 00:22:13.071 { 00:22:13.071 "name": "pt2", 00:22:13.071 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:13.071 "is_configured": true, 00:22:13.071 "data_offset": 2048, 00:22:13.071 "data_size": 63488 00:22:13.071 }, 00:22:13.071 { 00:22:13.071 "name": "pt3", 00:22:13.071 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:13.071 "is_configured": true, 00:22:13.071 "data_offset": 2048, 00:22:13.071 "data_size": 63488 00:22:13.071 } 00:22:13.071 ] 00:22:13.071 }' 00:22:13.071 05:02:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.071 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:22:13.330 05:02:36 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:13.589 [2024-11-18 05:02:37.056476] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:13.589 [2024-11-18 05:02:37.056509] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:13.589 [2024-11-18 05:02:37.056582] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:13.589 [2024-11-18 05:02:37.056644] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:13.589 [2024-11-18 05:02:37.056660] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:22:13.589 05:02:37 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.589 05:02:37 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:13.848 05:02:37 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:13.848 05:02:37 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:13.848 05:02:37 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:13.848 05:02:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:13.848 05:02:37 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:14.107 05:02:37 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:14.107 05:02:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:14.107 05:02:37 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:14.366 05:02:37 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:14.366 05:02:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:14.366 05:02:37 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:14.366 05:02:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:14.366 05:02:37 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:14.367 [2024-11-18 05:02:37.832577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:14.367 [2024-11-18 05:02:37.832680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:22:14.367 [2024-11-18 05:02:37.832705] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:22:14.367 [2024-11-18 05:02:37.832721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.367 [2024-11-18 05:02:37.835044] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.367 [2024-11-18 05:02:37.835115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.367 [2024-11-18 05:02:37.835233] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:14.367 [2024-11-18 05:02:37.835289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:14.367 pt2 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.367 05:02:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.626 05:02:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.626 "name": "raid_bdev1", 00:22:14.626 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:14.626 "strip_size_kb": 64, 00:22:14.626 "state": "configuring", 00:22:14.626 "raid_level": "raid5f", 00:22:14.626 "superblock": true, 00:22:14.626 "num_base_bdevs": 3, 00:22:14.626 "num_base_bdevs_discovered": 1, 00:22:14.626 "num_base_bdevs_operational": 2, 00:22:14.626 "base_bdevs_list": [ 00:22:14.626 { 00:22:14.626 "name": null, 00:22:14.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.626 "is_configured": false, 00:22:14.626 "data_offset": 2048, 00:22:14.626 "data_size": 63488 00:22:14.626 }, 00:22:14.626 { 00:22:14.626 "name": "pt2", 00:22:14.626 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:14.626 "is_configured": true, 00:22:14.626 "data_offset": 2048, 00:22:14.626 "data_size": 63488 00:22:14.626 }, 00:22:14.626 { 00:22:14.626 "name": null, 00:22:14.626 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:14.626 "is_configured": false, 00:22:14.626 "data_offset": 2048, 00:22:14.626 "data_size": 63488 00:22:14.626 } 00:22:14.626 ] 00:22:14.626 }' 00:22:14.626 05:02:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.626 05:02:38 -- common/autotest_common.sh@10 -- # set +x 00:22:14.885 05:02:38 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:14.885 05:02:38 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:14.885 05:02:38 -- bdev/bdev_raid.sh@462 -- # i=2 00:22:14.885 05:02:38 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:15.145 [2024-11-18 05:02:38.552811] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:15.145 [2024-11-18 05:02:38.552893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.145 [2024-11-18 05:02:38.552922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:22:15.145 [2024-11-18 05:02:38.552936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.145 [2024-11-18 05:02:38.553437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.145 [2024-11-18 05:02:38.553476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:15.145 [2024-11-18 05:02:38.553593] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:15.145 [2024-11-18 05:02:38.553637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:15.145 [2024-11-18 05:02:38.553757] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:22:15.145 [2024-11-18 05:02:38.553774] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:15.145 [2024-11-18 05:02:38.553856] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:22:15.145 [2024-11-18 05:02:38.558120] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:22:15.145 [2024-11-18 05:02:38.558146] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:22:15.145 [2024-11-18 05:02:38.558475] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.145 pt3 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.145 05:02:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.404 05:02:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.404 "name": "raid_bdev1", 00:22:15.404 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:15.404 "strip_size_kb": 64, 00:22:15.404 "state": "online", 00:22:15.404 "raid_level": "raid5f", 00:22:15.404 "superblock": true, 00:22:15.404 "num_base_bdevs": 3, 00:22:15.404 "num_base_bdevs_discovered": 2, 00:22:15.404 "num_base_bdevs_operational": 2, 00:22:15.404 "base_bdevs_list": [ 00:22:15.404 { 00:22:15.404 "name": null, 00:22:15.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.404 "is_configured": false, 00:22:15.404 "data_offset": 2048, 00:22:15.404 "data_size": 63488 00:22:15.404 }, 00:22:15.404 { 00:22:15.404 "name": "pt2", 00:22:15.404 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 
00:22:15.404 "is_configured": true, 00:22:15.404 "data_offset": 2048, 00:22:15.404 "data_size": 63488 00:22:15.404 }, 00:22:15.404 { 00:22:15.404 "name": "pt3", 00:22:15.404 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:15.404 "is_configured": true, 00:22:15.404 "data_offset": 2048, 00:22:15.404 "data_size": 63488 00:22:15.404 } 00:22:15.404 ] 00:22:15.404 }' 00:22:15.404 05:02:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.404 05:02:38 -- common/autotest_common.sh@10 -- # set +x 00:22:15.664 05:02:39 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:22:15.664 05:02:39 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.922 [2024-11-18 05:02:39.326950] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.922 [2024-11-18 05:02:39.327002] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.922 [2024-11-18 05:02:39.327086] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.922 [2024-11-18 05:02:39.327150] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.922 [2024-11-18 05:02:39.327163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:22:15.922 05:02:39 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.922 05:02:39 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:16.182 05:02:39 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:16.182 05:02:39 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:16.182 05:02:39 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.441 [2024-11-18 05:02:39.739022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.441 [2024-11-18 05:02:39.739096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.441 [2024-11-18 05:02:39.739124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:22:16.441 [2024-11-18 05:02:39.739136] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.441 [2024-11-18 05:02:39.741327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.441 [2024-11-18 05:02:39.741381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.441 [2024-11-18 05:02:39.741473] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:16.441 [2024-11-18 05:02:39.741518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:16.441 pt1 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.441 05:02:39 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.441 "name": "raid_bdev1", 00:22:16.441 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:16.441 "strip_size_kb": 64, 00:22:16.441 "state": "configuring", 00:22:16.441 "raid_level": "raid5f", 00:22:16.441 "superblock": true, 00:22:16.441 "num_base_bdevs": 3, 00:22:16.441 "num_base_bdevs_discovered": 1, 00:22:16.441 "num_base_bdevs_operational": 3, 00:22:16.441 "base_bdevs_list": [ 00:22:16.441 { 00:22:16.441 "name": "pt1", 00:22:16.441 "uuid": "98f6dcfb-78d8-533a-bfdd-bd6c187fb41d", 00:22:16.441 "is_configured": true, 00:22:16.441 "data_offset": 2048, 00:22:16.441 "data_size": 63488 00:22:16.441 }, 00:22:16.441 { 00:22:16.441 "name": null, 00:22:16.441 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:16.441 "is_configured": false, 00:22:16.441 "data_offset": 2048, 00:22:16.441 "data_size": 63488 00:22:16.441 }, 00:22:16.441 { 00:22:16.441 "name": null, 00:22:16.441 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:16.441 "is_configured": false, 00:22:16.441 "data_offset": 2048, 00:22:16.441 "data_size": 63488 00:22:16.441 } 00:22:16.441 ] 00:22:16.441 }' 00:22:16.441 05:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.441 05:02:39 -- common/autotest_common.sh@10 -- # set +x 00:22:16.701 05:02:40 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:16.701 05:02:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:16.701 05:02:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:16.960 05:02:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:16.960 05:02:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:16.960 05:02:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:17.219 05:02:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:17.219 05:02:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:17.219 05:02:40 -- bdev/bdev_raid.sh@489 -- # i=2 00:22:17.219 05:02:40 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:17.479 [2024-11-18 05:02:40.795305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:17.479 [2024-11-18 05:02:40.795379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.479 [2024-11-18 05:02:40.795406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:22:17.479 [2024-11-18 05:02:40.795419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.479 [2024-11-18 05:02:40.795905] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.479 [2024-11-18 05:02:40.795929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.479 [2024-11-18 05:02:40.796035] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:22:17.479 [2024-11-18 05:02:40.796050] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:17.479 [2024-11-18 05:02:40.796064] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.479 [2024-11-18 05:02:40.796102] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:22:17.479 [2024-11-18 05:02:40.796166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:17.479 pt3 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.479 "name": "raid_bdev1", 00:22:17.479 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:17.479 "strip_size_kb": 64, 00:22:17.479 "state": "configuring", 00:22:17.479 "raid_level": "raid5f", 00:22:17.479 "superblock": true, 00:22:17.479 "num_base_bdevs": 3, 00:22:17.479 "num_base_bdevs_discovered": 1, 00:22:17.479 "num_base_bdevs_operational": 2, 00:22:17.479 "base_bdevs_list": [ 00:22:17.479 { 00:22:17.479 "name": null, 00:22:17.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.479 "is_configured": false, 00:22:17.479 "data_offset": 2048, 00:22:17.479 "data_size": 63488 00:22:17.479 }, 00:22:17.479 { 00:22:17.479 "name": null, 00:22:17.479 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:17.479 "is_configured": false, 00:22:17.479 "data_offset": 2048, 00:22:17.479 "data_size": 63488 00:22:17.479 }, 00:22:17.479 { 00:22:17.479 "name": "pt3", 00:22:17.479 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:17.479 "is_configured": true, 00:22:17.479 "data_offset": 2048, 00:22:17.479 "data_size": 63488 00:22:17.479 } 00:22:17.479 ] 00:22:17.479 }' 00:22:17.479 05:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.479 05:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:17.738 05:02:41 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:17.738 05:02:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:17.739 05:02:41 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.998 [2024-11-18 05:02:41.415451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.998 [2024-11-18 05:02:41.415536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.998 [2024-11-18 
05:02:41.415563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:22:17.998 [2024-11-18 05:02:41.415577] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.998 [2024-11-18 05:02:41.416039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.998 [2024-11-18 05:02:41.416072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.998 [2024-11-18 05:02:41.416170] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:17.998 [2024-11-18 05:02:41.416235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.998 [2024-11-18 05:02:41.416356] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:22:17.998 [2024-11-18 05:02:41.416375] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:17.998 [2024-11-18 05:02:41.416460] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:17.998 [2024-11-18 05:02:41.420395] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:22:17.998 [2024-11-18 05:02:41.420419] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:22:17.998 [2024-11-18 05:02:41.420718] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.998 pt2 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.998 05:02:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.257 05:02:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.257 "name": "raid_bdev1", 00:22:18.257 "uuid": "f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8", 00:22:18.257 "strip_size_kb": 64, 00:22:18.257 "state": "online", 00:22:18.257 "raid_level": "raid5f", 00:22:18.257 "superblock": true, 00:22:18.257 "num_base_bdevs": 3, 00:22:18.257 "num_base_bdevs_discovered": 2, 00:22:18.257 "num_base_bdevs_operational": 2, 00:22:18.257 "base_bdevs_list": [ 00:22:18.257 { 00:22:18.257 "name": null, 00:22:18.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.257 "is_configured": false, 00:22:18.257 "data_offset": 2048, 00:22:18.257 "data_size": 63488 00:22:18.257 }, 00:22:18.257 { 00:22:18.257 "name": "pt2", 00:22:18.257 "uuid": "0a370bad-f32c-572e-832c-0547f6d3a33c", 00:22:18.257 "is_configured": true, 00:22:18.257 "data_offset": 2048, 
00:22:18.257 "data_size": 63488 00:22:18.257 }, 00:22:18.257 { 00:22:18.257 "name": "pt3", 00:22:18.257 "uuid": "a1238f83-0d04-5a44-bf2b-7ecb9e1aed08", 00:22:18.257 "is_configured": true, 00:22:18.257 "data_offset": 2048, 00:22:18.257 "data_size": 63488 00:22:18.257 } 00:22:18.257 ] 00:22:18.257 }' 00:22:18.257 05:02:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.257 05:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:18.516 05:02:41 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:18.516 05:02:41 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:18.776 [2024-11-18 05:02:42.161326] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.776 05:02:42 -- bdev/bdev_raid.sh@506 -- # '[' f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8 '!=' f8875ff7-dd9f-4c6e-a32d-3cca4aed0ee8 ']' 00:22:18.776 05:02:42 -- bdev/bdev_raid.sh@511 -- # killprocess 83034 00:22:18.776 05:02:42 -- common/autotest_common.sh@936 -- # '[' -z 83034 ']' 00:22:18.776 05:02:42 -- common/autotest_common.sh@940 -- # kill -0 83034 00:22:18.776 05:02:42 -- common/autotest_common.sh@941 -- # uname 00:22:18.776 05:02:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.776 05:02:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83034 00:22:18.776 killing process with pid 83034 00:22:18.776 05:02:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:18.776 05:02:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:18.776 05:02:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83034' 00:22:18.776 05:02:42 -- common/autotest_common.sh@955 -- # kill 83034 00:22:18.776 [2024-11-18 05:02:42.209802] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:18.776 [2024-11-18 05:02:42.209865] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.776 05:02:42 -- common/autotest_common.sh@960 -- # wait 83034 00:22:18.776 [2024-11-18 05:02:42.209952] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.776 [2024-11-18 05:02:42.209983] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:22:19.036 [2024-11-18 05:02:42.397513] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:19.972 00:22:19.972 real 0m14.690s 00:22:19.972 user 0m25.300s 00:22:19.972 sys 0m2.274s 00:22:19.972 ************************************ 00:22:19.972 END TEST raid5f_superblock_test 00:22:19.972 ************************************ 00:22:19.972 05:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:19.972 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:22:19.972 05:02:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:19.972 05:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:19.972 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:19.972 ************************************ 00:22:19.972 START TEST raid5f_rebuild_test 00:22:19.972 ************************************ 00:22:19.972 05:02:43 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 false 
false 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:19.972 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@544 -- # raid_pid=83561 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@545 -- # waitforlisten 83561 /var/tmp/spdk-raid.sock 00:22:19.973 05:02:43 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.973 05:02:43 -- common/autotest_common.sh@829 -- # '[' -z 83561 ']' 00:22:19.973 05:02:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.973 05:02:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.973 05:02:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:19.973 05:02:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.973 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:19.973 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.973 Zero copy mechanism will not be used. 00:22:19.973 [2024-11-18 05:02:43.422851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
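The rebuild test starting here uses bdevperf instead of bdev_svc, so the raid5f bdev is exercised with live I/O while base bdevs are removed and re-added. A condensed sketch of the setup follows; the bdevperf command line and all RPC calls are copied from the trace (superblock=false, so the base bdevs are plain mallocs, and the spare is a delay bdev wrapped in a passthru). As before, the SPDK/SOCK variables and the readiness loop are illustrative stand-ins for the harness's waitforlisten.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock

  # 60 s of 50/50 random read/write at 3 MiB I/O size, queue depth 2, targeting raid_bdev1;
  # remaining flags copied verbatim from the harness invocation above
  "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # plain malloc base bdevs (no passthru layer: superblock=false)
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b "$b"
  done

  # the spare: malloc -> delay bdev (for throttled rebuild I/O) -> passthru
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b spare_malloc
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b spare_delay -p spare

  # assemble the raid5f array without a superblock
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

The 100000 us write/nvme latencies on spare_delay are what make the later rebuild slow enough to observe while bdevperf keeps I/O in flight.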
00:22:19.973 [2024-11-18 05:02:43.422998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83561 ] 00:22:20.232 [2024-11-18 05:02:43.576009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.232 [2024-11-18 05:02:43.725791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.491 [2024-11-18 05:02:43.867885] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:21.059 05:02:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.059 05:02:44 -- common/autotest_common.sh@862 -- # return 0 00:22:21.059 05:02:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:21.059 05:02:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:21.059 05:02:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:21.318 BaseBdev1 00:22:21.318 05:02:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:21.318 05:02:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:21.318 05:02:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:21.578 BaseBdev2 00:22:21.578 05:02:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:21.578 05:02:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:21.578 05:02:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:21.578 BaseBdev3 00:22:21.578 05:02:45 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:21.837 spare_malloc 00:22:21.837 05:02:45 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:22.096 spare_delay 00:22:22.096 05:02:45 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:22.355 [2024-11-18 05:02:45.666535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.355 [2024-11-18 05:02:45.666790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.355 [2024-11-18 05:02:45.666825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:22.355 [2024-11-18 05:02:45.666842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.355 [2024-11-18 05:02:45.669070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.355 [2024-11-18 05:02:45.669112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.355 spare 00:22:22.355 05:02:45 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:22.355 [2024-11-18 05:02:45.846617] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.355 [2024-11-18 05:02:45.848576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
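[Editor's note] The member and spare devices created above are plain RPC calls; condensed into one place (the same commands traced in this run), the setup is:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Three 32 MiB malloc bdevs with 512 B blocks act as the raid5f members.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done
  # The spare is a malloc bdev behind a delay bdev (-w/-n are average/p99
  # write latency in microseconds, so ~100 ms per write) and a passthru bdev
  # named "spare"; the write delay slows the rebuild enough for the progress
  # polling seen later in the log.
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare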
BaseBdev2 is claimed 00:22:22.355 [2024-11-18 05:02:45.848746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:22.355 [2024-11-18 05:02:45.848869] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:22:22.355 [2024-11-18 05:02:45.849015] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:22.355 [2024-11-18 05:02:45.849292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:22.355 [2024-11-18 05:02:45.853566] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:22:22.356 [2024-11-18 05:02:45.853591] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:22:22.356 [2024-11-18 05:02:45.853775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.356 05:02:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.615 05:02:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.615 "name": "raid_bdev1", 00:22:22.615 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:22.615 "strip_size_kb": 64, 00:22:22.615 "state": "online", 00:22:22.615 "raid_level": "raid5f", 00:22:22.615 "superblock": false, 00:22:22.615 "num_base_bdevs": 3, 00:22:22.615 "num_base_bdevs_discovered": 3, 00:22:22.615 "num_base_bdevs_operational": 3, 00:22:22.615 "base_bdevs_list": [ 00:22:22.615 { 00:22:22.615 "name": "BaseBdev1", 00:22:22.615 "uuid": "5edc4afd-a695-4e67-b667-c204b4ca4918", 00:22:22.615 "is_configured": true, 00:22:22.615 "data_offset": 0, 00:22:22.615 "data_size": 65536 00:22:22.615 }, 00:22:22.615 { 00:22:22.615 "name": "BaseBdev2", 00:22:22.615 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:22.615 "is_configured": true, 00:22:22.615 "data_offset": 0, 00:22:22.615 "data_size": 65536 00:22:22.615 }, 00:22:22.615 { 00:22:22.615 "name": "BaseBdev3", 00:22:22.615 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:22.615 "is_configured": true, 00:22:22.615 "data_offset": 0, 00:22:22.615 "data_size": 65536 00:22:22.615 } 00:22:22.615 ] 00:22:22.615 }' 00:22:22.615 05:02:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.615 05:02:46 -- common/autotest_common.sh@10 -- # set +x 00:22:22.874 05:02:46 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.874 05:02:46 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:23.133 [2024-11-18 05:02:46.546586] bdev_raid.c: 
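[Editor's note] The verify_raid_bdev_state call above reduces to one RPC plus a handful of jq assertions over the JSON just dumped; a sketch of the core checks, reusing the trace's own filter and the $RPC shorthand from the previous note:

  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state'      <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
  (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ))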
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.133 05:02:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:22:23.133 05:02:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.133 05:02:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:23.392 05:02:46 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:23.392 05:02:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:23.392 05:02:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:23.392 05:02:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@12 -- # local i 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.392 05:02:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:23.651 [2024-11-18 05:02:46.970579] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:23.651 /dev/nbd0 00:22:23.651 05:02:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:23.651 05:02:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:23.651 05:02:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:23.651 05:02:47 -- common/autotest_common.sh@867 -- # local i 00:22:23.651 05:02:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:23.651 05:02:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:23.651 05:02:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:23.651 05:02:47 -- common/autotest_common.sh@871 -- # break 00:22:23.651 05:02:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:23.651 05:02:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:23.651 05:02:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.651 1+0 records in 00:22:23.651 1+0 records out 00:22:23.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206098 s, 19.9 MB/s 00:22:23.651 05:02:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.651 05:02:47 -- common/autotest_common.sh@884 -- # size=4096 00:22:23.651 05:02:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.651 05:02:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:23.651 05:02:47 -- common/autotest_common.sh@887 -- # return 0 00:22:23.651 05:02:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.651 05:02:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.651 05:02:47 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:23.651 05:02:47 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:23.651 05:02:47 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:23.651 05:02:47 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:23.910 512+0 records in 00:22:23.910 
512+0 records out 00:22:23.910 67108864 bytes (67 MB, 64 MiB) copied, 0.363585 s, 185 MB/s 00:22:23.910 05:02:47 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:23.910 05:02:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.910 05:02:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:23.910 05:02:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:23.910 05:02:47 -- bdev/nbd_common.sh@51 -- # local i 00:22:23.910 05:02:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:23.910 05:02:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:24.169 [2024-11-18 05:02:47.592697] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@41 -- # break 00:22:24.169 05:02:47 -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.169 05:02:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:24.428 [2024-11-18 05:02:47.834346] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.428 05:02:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.686 05:02:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.686 "name": "raid_bdev1", 00:22:24.686 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:24.686 "strip_size_kb": 64, 00:22:24.686 "state": "online", 00:22:24.686 "raid_level": "raid5f", 00:22:24.686 "superblock": false, 00:22:24.686 "num_base_bdevs": 3, 00:22:24.686 "num_base_bdevs_discovered": 2, 00:22:24.686 "num_base_bdevs_operational": 2, 00:22:24.686 "base_bdevs_list": [ 00:22:24.686 { 00:22:24.686 "name": null, 00:22:24.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.686 "is_configured": false, 00:22:24.686 "data_offset": 0, 00:22:24.686 "data_size": 65536 00:22:24.686 }, 00:22:24.686 { 00:22:24.686 "name": "BaseBdev2", 00:22:24.686 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:24.686 "is_configured": true, 00:22:24.686 "data_offset": 0, 00:22:24.686 
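[Editor's note] The dd geometry above falls out of the array shape. With raid5f over three members and strip_size_kb=64, each stripe row carries two 64 KiB data strips plus parity, so one full-stripe write is 128 KiB (the `echo 128` in the trace) and:

  write unit = 131072 B / 512 B per sector = 256 sectors (write_unit_size)
  array size = 512 writes x 131072 B       = 67108864 B = 64 MiB
             = 131072 sectors, i.e. 2 data members x 32 MiB (raid_bdev_size)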
"data_size": 65536 00:22:24.686 }, 00:22:24.686 { 00:22:24.686 "name": "BaseBdev3", 00:22:24.686 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:24.686 "is_configured": true, 00:22:24.686 "data_offset": 0, 00:22:24.686 "data_size": 65536 00:22:24.686 } 00:22:24.686 ] 00:22:24.686 }' 00:22:24.686 05:02:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.686 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:22:24.946 05:02:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:25.205 [2024-11-18 05:02:48.562616] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:25.205 [2024-11-18 05:02:48.562799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.205 [2024-11-18 05:02:48.573413] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002af30 00:22:25.205 [2024-11-18 05:02:48.579151] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.205 05:02:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.142 05:02:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.401 05:02:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.401 "name": "raid_bdev1", 00:22:26.401 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:26.401 "strip_size_kb": 64, 00:22:26.401 "state": "online", 00:22:26.401 "raid_level": "raid5f", 00:22:26.401 "superblock": false, 00:22:26.401 "num_base_bdevs": 3, 00:22:26.401 "num_base_bdevs_discovered": 3, 00:22:26.401 "num_base_bdevs_operational": 3, 00:22:26.401 "process": { 00:22:26.401 "type": "rebuild", 00:22:26.401 "target": "spare", 00:22:26.401 "progress": { 00:22:26.401 "blocks": 24576, 00:22:26.401 "percent": 18 00:22:26.401 } 00:22:26.401 }, 00:22:26.401 "base_bdevs_list": [ 00:22:26.401 { 00:22:26.401 "name": "spare", 00:22:26.401 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:26.401 "is_configured": true, 00:22:26.401 "data_offset": 0, 00:22:26.401 "data_size": 65536 00:22:26.401 }, 00:22:26.401 { 00:22:26.401 "name": "BaseBdev2", 00:22:26.401 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:26.401 "is_configured": true, 00:22:26.401 "data_offset": 0, 00:22:26.401 "data_size": 65536 00:22:26.401 }, 00:22:26.401 { 00:22:26.401 "name": "BaseBdev3", 00:22:26.401 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:26.401 "is_configured": true, 00:22:26.401 "data_offset": 0, 00:22:26.401 "data_size": 65536 00:22:26.401 } 00:22:26.401 ] 00:22:26.401 }' 00:22:26.401 05:02:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.401 05:02:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.401 05:02:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.401 05:02:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.401 05:02:49 -- bdev/bdev_raid.sh@604 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:26.660 [2024-11-18 05:02:50.060164] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.660 [2024-11-18 05:02:50.090313] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:26.660 [2024-11-18 05:02:50.090567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.660 05:02:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.920 05:02:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.920 "name": "raid_bdev1", 00:22:26.920 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:26.920 "strip_size_kb": 64, 00:22:26.920 "state": "online", 00:22:26.920 "raid_level": "raid5f", 00:22:26.920 "superblock": false, 00:22:26.920 "num_base_bdevs": 3, 00:22:26.920 "num_base_bdevs_discovered": 2, 00:22:26.920 "num_base_bdevs_operational": 2, 00:22:26.920 "base_bdevs_list": [ 00:22:26.920 { 00:22:26.920 "name": null, 00:22:26.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.920 "is_configured": false, 00:22:26.920 "data_offset": 0, 00:22:26.920 "data_size": 65536 00:22:26.920 }, 00:22:26.920 { 00:22:26.920 "name": "BaseBdev2", 00:22:26.920 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:26.920 "is_configured": true, 00:22:26.920 "data_offset": 0, 00:22:26.920 "data_size": 65536 00:22:26.920 }, 00:22:26.920 { 00:22:26.920 "name": "BaseBdev3", 00:22:26.920 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:26.920 "is_configured": true, 00:22:26.920 "data_offset": 0, 00:22:26.920 "data_size": 65536 00:22:26.920 } 00:22:26.920 ] 00:22:26.920 }' 00:22:26.920 05:02:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.920 05:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.179 05:02:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.438 05:02:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.438 "name": 
"raid_bdev1", 00:22:27.438 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:27.438 "strip_size_kb": 64, 00:22:27.438 "state": "online", 00:22:27.438 "raid_level": "raid5f", 00:22:27.438 "superblock": false, 00:22:27.438 "num_base_bdevs": 3, 00:22:27.438 "num_base_bdevs_discovered": 2, 00:22:27.438 "num_base_bdevs_operational": 2, 00:22:27.438 "base_bdevs_list": [ 00:22:27.438 { 00:22:27.438 "name": null, 00:22:27.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.438 "is_configured": false, 00:22:27.438 "data_offset": 0, 00:22:27.438 "data_size": 65536 00:22:27.438 }, 00:22:27.438 { 00:22:27.438 "name": "BaseBdev2", 00:22:27.438 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:27.438 "is_configured": true, 00:22:27.438 "data_offset": 0, 00:22:27.438 "data_size": 65536 00:22:27.438 }, 00:22:27.438 { 00:22:27.438 "name": "BaseBdev3", 00:22:27.438 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:27.438 "is_configured": true, 00:22:27.438 "data_offset": 0, 00:22:27.438 "data_size": 65536 00:22:27.438 } 00:22:27.438 ] 00:22:27.438 }' 00:22:27.438 05:02:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.438 05:02:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:27.438 05:02:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.438 05:02:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:27.438 05:02:50 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:27.697 [2024-11-18 05:02:51.169329] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:27.697 [2024-11-18 05:02:51.169538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.697 [2024-11-18 05:02:51.179469] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:22:27.697 [2024-11-18 05:02:51.185227] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:27.697 05:02:51 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.076 "name": "raid_bdev1", 00:22:29.076 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:29.076 "strip_size_kb": 64, 00:22:29.076 "state": "online", 00:22:29.076 "raid_level": "raid5f", 00:22:29.076 "superblock": false, 00:22:29.076 "num_base_bdevs": 3, 00:22:29.076 "num_base_bdevs_discovered": 3, 00:22:29.076 "num_base_bdevs_operational": 3, 00:22:29.076 "process": { 00:22:29.076 "type": "rebuild", 00:22:29.076 "target": "spare", 00:22:29.076 "progress": { 00:22:29.076 "blocks": 24576, 00:22:29.076 "percent": 18 00:22:29.076 } 00:22:29.076 }, 00:22:29.076 "base_bdevs_list": [ 00:22:29.076 { 00:22:29.076 "name": "spare", 00:22:29.076 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 
00:22:29.076 "is_configured": true, 00:22:29.076 "data_offset": 0, 00:22:29.076 "data_size": 65536 00:22:29.076 }, 00:22:29.076 { 00:22:29.076 "name": "BaseBdev2", 00:22:29.076 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:29.076 "is_configured": true, 00:22:29.076 "data_offset": 0, 00:22:29.076 "data_size": 65536 00:22:29.076 }, 00:22:29.076 { 00:22:29.076 "name": "BaseBdev3", 00:22:29.076 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:29.076 "is_configured": true, 00:22:29.076 "data_offset": 0, 00:22:29.076 "data_size": 65536 00:22:29.076 } 00:22:29.076 ] 00:22:29.076 }' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@657 -- # local timeout=546 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.076 05:02:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.335 05:02:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.335 "name": "raid_bdev1", 00:22:29.335 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:29.335 "strip_size_kb": 64, 00:22:29.335 "state": "online", 00:22:29.335 "raid_level": "raid5f", 00:22:29.335 "superblock": false, 00:22:29.335 "num_base_bdevs": 3, 00:22:29.335 "num_base_bdevs_discovered": 3, 00:22:29.335 "num_base_bdevs_operational": 3, 00:22:29.335 "process": { 00:22:29.335 "type": "rebuild", 00:22:29.335 "target": "spare", 00:22:29.335 "progress": { 00:22:29.335 "blocks": 28672, 00:22:29.335 "percent": 21 00:22:29.335 } 00:22:29.335 }, 00:22:29.335 "base_bdevs_list": [ 00:22:29.335 { 00:22:29.335 "name": "spare", 00:22:29.335 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:29.335 "is_configured": true, 00:22:29.335 "data_offset": 0, 00:22:29.335 "data_size": 65536 00:22:29.335 }, 00:22:29.335 { 00:22:29.335 "name": "BaseBdev2", 00:22:29.335 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:29.335 "is_configured": true, 00:22:29.335 "data_offset": 0, 00:22:29.335 "data_size": 65536 00:22:29.335 }, 00:22:29.335 { 00:22:29.335 "name": "BaseBdev3", 00:22:29.335 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:29.335 "is_configured": true, 00:22:29.335 "data_offset": 0, 00:22:29.335 "data_size": 65536 00:22:29.335 } 00:22:29.335 ] 00:22:29.335 }' 00:22:29.335 05:02:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.335 05:02:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.335 05:02:52 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:22:29.336 05:02:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.336 05:02:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.273 05:02:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.532 05:02:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.532 "name": "raid_bdev1", 00:22:30.532 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:30.532 "strip_size_kb": 64, 00:22:30.532 "state": "online", 00:22:30.532 "raid_level": "raid5f", 00:22:30.532 "superblock": false, 00:22:30.532 "num_base_bdevs": 3, 00:22:30.532 "num_base_bdevs_discovered": 3, 00:22:30.532 "num_base_bdevs_operational": 3, 00:22:30.532 "process": { 00:22:30.532 "type": "rebuild", 00:22:30.532 "target": "spare", 00:22:30.532 "progress": { 00:22:30.532 "blocks": 55296, 00:22:30.532 "percent": 42 00:22:30.532 } 00:22:30.532 }, 00:22:30.532 "base_bdevs_list": [ 00:22:30.532 { 00:22:30.532 "name": "spare", 00:22:30.532 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:30.532 "is_configured": true, 00:22:30.532 "data_offset": 0, 00:22:30.532 "data_size": 65536 00:22:30.532 }, 00:22:30.532 { 00:22:30.532 "name": "BaseBdev2", 00:22:30.532 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:30.532 "is_configured": true, 00:22:30.532 "data_offset": 0, 00:22:30.532 "data_size": 65536 00:22:30.532 }, 00:22:30.532 { 00:22:30.532 "name": "BaseBdev3", 00:22:30.532 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:30.532 "is_configured": true, 00:22:30.532 "data_offset": 0, 00:22:30.532 "data_size": 65536 00:22:30.532 } 00:22:30.532 ] 00:22:30.532 }' 00:22:30.532 05:02:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.532 05:02:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.532 05:02:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.532 05:02:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.532 05:02:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.910 05:02:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.910 05:02:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.910 "name": "raid_bdev1", 00:22:31.910 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 
00:22:31.910 "strip_size_kb": 64, 00:22:31.910 "state": "online", 00:22:31.910 "raid_level": "raid5f", 00:22:31.910 "superblock": false, 00:22:31.910 "num_base_bdevs": 3, 00:22:31.910 "num_base_bdevs_discovered": 3, 00:22:31.910 "num_base_bdevs_operational": 3, 00:22:31.910 "process": { 00:22:31.910 "type": "rebuild", 00:22:31.910 "target": "spare", 00:22:31.910 "progress": { 00:22:31.910 "blocks": 79872, 00:22:31.910 "percent": 60 00:22:31.910 } 00:22:31.910 }, 00:22:31.910 "base_bdevs_list": [ 00:22:31.910 { 00:22:31.910 "name": "spare", 00:22:31.910 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:31.910 "is_configured": true, 00:22:31.910 "data_offset": 0, 00:22:31.910 "data_size": 65536 00:22:31.911 }, 00:22:31.911 { 00:22:31.911 "name": "BaseBdev2", 00:22:31.911 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:31.911 "is_configured": true, 00:22:31.911 "data_offset": 0, 00:22:31.911 "data_size": 65536 00:22:31.911 }, 00:22:31.911 { 00:22:31.911 "name": "BaseBdev3", 00:22:31.911 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:31.911 "is_configured": true, 00:22:31.911 "data_offset": 0, 00:22:31.911 "data_size": 65536 00:22:31.911 } 00:22:31.911 ] 00:22:31.911 }' 00:22:31.911 05:02:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.911 05:02:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.911 05:02:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.911 05:02:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.911 05:02:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.847 05:02:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.848 05:02:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.107 05:02:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.107 "name": "raid_bdev1", 00:22:33.107 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:33.107 "strip_size_kb": 64, 00:22:33.107 "state": "online", 00:22:33.107 "raid_level": "raid5f", 00:22:33.107 "superblock": false, 00:22:33.107 "num_base_bdevs": 3, 00:22:33.107 "num_base_bdevs_discovered": 3, 00:22:33.107 "num_base_bdevs_operational": 3, 00:22:33.107 "process": { 00:22:33.107 "type": "rebuild", 00:22:33.107 "target": "spare", 00:22:33.107 "progress": { 00:22:33.107 "blocks": 106496, 00:22:33.107 "percent": 81 00:22:33.107 } 00:22:33.107 }, 00:22:33.107 "base_bdevs_list": [ 00:22:33.107 { 00:22:33.107 "name": "spare", 00:22:33.107 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:33.107 "is_configured": true, 00:22:33.107 "data_offset": 0, 00:22:33.107 "data_size": 65536 00:22:33.107 }, 00:22:33.107 { 00:22:33.107 "name": "BaseBdev2", 00:22:33.107 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:33.107 "is_configured": true, 00:22:33.107 "data_offset": 0, 00:22:33.107 "data_size": 65536 00:22:33.107 }, 00:22:33.107 { 00:22:33.107 "name": "BaseBdev3", 00:22:33.107 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 
00:22:33.107 "is_configured": true, 00:22:33.107 "data_offset": 0, 00:22:33.107 "data_size": 65536 00:22:33.107 } 00:22:33.107 ] 00:22:33.107 }' 00:22:33.107 05:02:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.107 05:02:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.107 05:02:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.107 05:02:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.107 05:02:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.043 05:02:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.303 [2024-11-18 05:02:57.631069] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:34.303 [2024-11-18 05:02:57.631145] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:34.303 [2024-11-18 05:02:57.631239] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.303 "name": "raid_bdev1", 00:22:34.303 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:34.303 "strip_size_kb": 64, 00:22:34.303 "state": "online", 00:22:34.303 "raid_level": "raid5f", 00:22:34.303 "superblock": false, 00:22:34.303 "num_base_bdevs": 3, 00:22:34.303 "num_base_bdevs_discovered": 3, 00:22:34.303 "num_base_bdevs_operational": 3, 00:22:34.303 "base_bdevs_list": [ 00:22:34.303 { 00:22:34.303 "name": "spare", 00:22:34.303 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:34.303 "is_configured": true, 00:22:34.303 "data_offset": 0, 00:22:34.303 "data_size": 65536 00:22:34.303 }, 00:22:34.303 { 00:22:34.303 "name": "BaseBdev2", 00:22:34.303 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:34.303 "is_configured": true, 00:22:34.303 "data_offset": 0, 00:22:34.303 "data_size": 65536 00:22:34.303 }, 00:22:34.303 { 00:22:34.303 "name": "BaseBdev3", 00:22:34.303 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:34.303 "is_configured": true, 00:22:34.303 "data_offset": 0, 00:22:34.303 "data_size": 65536 00:22:34.303 } 00:22:34.303 ] 00:22:34.303 }' 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@660 -- # break 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:34.303 05:02:57 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.303 05:02:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.563 05:02:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.563 "name": "raid_bdev1", 00:22:34.563 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:34.563 "strip_size_kb": 64, 00:22:34.563 "state": "online", 00:22:34.563 "raid_level": "raid5f", 00:22:34.563 "superblock": false, 00:22:34.563 "num_base_bdevs": 3, 00:22:34.563 "num_base_bdevs_discovered": 3, 00:22:34.563 "num_base_bdevs_operational": 3, 00:22:34.563 "base_bdevs_list": [ 00:22:34.563 { 00:22:34.563 "name": "spare", 00:22:34.563 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:34.563 "is_configured": true, 00:22:34.563 "data_offset": 0, 00:22:34.563 "data_size": 65536 00:22:34.563 }, 00:22:34.563 { 00:22:34.563 "name": "BaseBdev2", 00:22:34.563 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:34.563 "is_configured": true, 00:22:34.563 "data_offset": 0, 00:22:34.563 "data_size": 65536 00:22:34.563 }, 00:22:34.563 { 00:22:34.563 "name": "BaseBdev3", 00:22:34.563 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:34.563 "is_configured": true, 00:22:34.563 "data_offset": 0, 00:22:34.563 "data_size": 65536 00:22:34.563 } 00:22:34.563 ] 00:22:34.563 }' 00:22:34.563 05:02:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.563 05:02:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.563 05:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.822 05:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.822 "name": "raid_bdev1", 00:22:34.822 "uuid": "ca98e352-6069-49f5-a2d9-97d8bd3328fd", 00:22:34.822 "strip_size_kb": 64, 00:22:34.822 "state": "online", 00:22:34.822 "raid_level": "raid5f", 00:22:34.822 "superblock": false, 00:22:34.822 "num_base_bdevs": 3, 00:22:34.822 "num_base_bdevs_discovered": 3, 00:22:34.822 "num_base_bdevs_operational": 3, 00:22:34.822 "base_bdevs_list": [ 00:22:34.822 { 00:22:34.822 "name": "spare", 00:22:34.822 "uuid": "5c75a7fa-e975-559a-bfd0-9f9dd4b35807", 00:22:34.822 "is_configured": true, 00:22:34.822 "data_offset": 0, 00:22:34.822 "data_size": 65536 00:22:34.822 }, 00:22:34.822 { 00:22:34.822 "name": "BaseBdev2", 
00:22:34.822 "uuid": "71062739-a279-467f-9f57-7a801f2f937d", 00:22:34.822 "is_configured": true, 00:22:34.822 "data_offset": 0, 00:22:34.822 "data_size": 65536 00:22:34.822 }, 00:22:34.822 { 00:22:34.822 "name": "BaseBdev3", 00:22:34.822 "uuid": "70e86d7d-54ad-4d5b-98df-c3412c513aab", 00:22:34.822 "is_configured": true, 00:22:34.822 "data_offset": 0, 00:22:34.822 "data_size": 65536 00:22:34.822 } 00:22:34.822 ] 00:22:34.822 }' 00:22:34.822 05:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.822 05:02:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.081 05:02:58 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.340 [2024-11-18 05:02:58.714194] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.340 [2024-11-18 05:02:58.714368] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.340 [2024-11-18 05:02:58.714548] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.340 [2024-11-18 05:02:58.714774] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.340 [2024-11-18 05:02:58.714922] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:22:35.340 05:02:58 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.340 05:02:58 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:35.599 05:02:58 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:35.599 05:02:58 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:35.599 05:02:58 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:35.599 05:02:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:35.861 /dev/nbd0 00:22:35.861 05:02:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:35.861 05:02:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:35.861 05:02:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:35.861 05:02:59 -- common/autotest_common.sh@867 -- # local i 00:22:35.861 05:02:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:35.861 05:02:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:35.861 05:02:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:35.861 05:02:59 -- common/autotest_common.sh@871 -- # break 00:22:35.861 05:02:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:35.861 05:02:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:35.861 05:02:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.861 1+0 records 
in 00:22:35.861 1+0 records out 00:22:35.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020454 s, 20.0 MB/s 00:22:35.861 05:02:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.861 05:02:59 -- common/autotest_common.sh@884 -- # size=4096 00:22:35.861 05:02:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.861 05:02:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:35.861 05:02:59 -- common/autotest_common.sh@887 -- # return 0 00:22:35.861 05:02:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.861 05:02:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:35.861 05:02:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:36.141 /dev/nbd1 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.141 05:02:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:36.141 05:02:59 -- common/autotest_common.sh@867 -- # local i 00:22:36.141 05:02:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:36.141 05:02:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:36.141 05:02:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:36.141 05:02:59 -- common/autotest_common.sh@871 -- # break 00:22:36.141 05:02:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:36.141 05:02:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:36.141 05:02:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.141 1+0 records in 00:22:36.141 1+0 records out 00:22:36.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186846 s, 21.9 MB/s 00:22:36.141 05:02:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.141 05:02:59 -- common/autotest_common.sh@884 -- # size=4096 00:22:36.141 05:02:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.141 05:02:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:36.141 05:02:59 -- common/autotest_common.sh@887 -- # return 0 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:36.141 05:02:59 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:36.141 05:02:59 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.141 05:02:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.420 05:02:59 
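[Editor's note] The cmp above is the actual data-integrity check: the original contents of the removed member (BaseBdev1) and the rebuilt spare are exported over NBD and compared byte for byte. With superblock=false the data region starts at sector 0, hence `cmp -i 0`; a sketch of how the offset would generalize, assuming data_offset is reported in 512 B blocks as the blocklen here suggests (cmp -i takes bytes):

  off=$($RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')
  cmp -i $((off * 512)) /dev/nbd0 /dev/nbd1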
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@41 -- # break 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.420 05:02:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@41 -- # break 00:22:36.689 05:03:00 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.689 05:03:00 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:36.689 05:03:00 -- bdev/bdev_raid.sh@709 -- # killprocess 83561 00:22:36.689 05:03:00 -- common/autotest_common.sh@936 -- # '[' -z 83561 ']' 00:22:36.689 05:03:00 -- common/autotest_common.sh@940 -- # kill -0 83561 00:22:36.689 05:03:00 -- common/autotest_common.sh@941 -- # uname 00:22:36.689 05:03:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.689 05:03:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83561 00:22:36.689 05:03:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.689 05:03:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.689 05:03:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83561' 00:22:36.689 killing process with pid 83561 00:22:36.689 05:03:00 -- common/autotest_common.sh@955 -- # kill 83561 00:22:36.689 Received shutdown signal, test time was about 60.000000 seconds 00:22:36.689 00:22:36.689 Latency(us) 00:22:36.689 [2024-11-18T05:03:00.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.689 [2024-11-18T05:03:00.213Z] =================================================================================================================== 00:22:36.689 [2024-11-18T05:03:00.213Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:36.689 [2024-11-18 05:03:00.052170] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.689 05:03:00 -- common/autotest_common.sh@960 -- # wait 83561 00:22:36.948 [2024-11-18 05:03:00.317186] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:37.886 ************************************ 00:22:37.886 END TEST raid5f_rebuild_test 00:22:37.886 ************************************ 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:37.886 00:22:37.886 real 0m17.884s 00:22:37.886 user 0m25.118s 00:22:37.886 sys 0m2.298s 00:22:37.886 05:03:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:37.886 05:03:01 -- common/autotest_common.sh@10 -- # set +x 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:22:37.886 05:03:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:37.886 05:03:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.886 05:03:01 -- common/autotest_common.sh@10 -- # set +x 00:22:37.886 ************************************ 00:22:37.886 START TEST raid5f_rebuild_test_sb 00:22:37.886 
************************************ 00:22:37.886 05:03:01 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:37.886 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:37.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=84049 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 84049 /var/tmp/spdk-raid.sock 00:22:37.887 05:03:01 -- common/autotest_common.sh@829 -- # '[' -z 84049 ']' 00:22:37.887 05:03:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:37.887 05:03:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:37.887 05:03:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.887 05:03:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:37.887 05:03:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.887 05:03:01 -- common/autotest_common.sh@10 -- # set +x 00:22:37.887 [2024-11-18 05:03:01.373355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:37.887 [2024-11-18 05:03:01.373725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84049 ] 00:22:37.887 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:37.887 Zero copy mechanism will not be used. 00:22:38.146 [2024-11-18 05:03:01.543784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.406 [2024-11-18 05:03:01.698288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.406 [2024-11-18 05:03:01.841773] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.976 05:03:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.976 05:03:02 -- common/autotest_common.sh@862 -- # return 0 00:22:38.976 05:03:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:38.976 05:03:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:38.976 05:03:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:39.235 BaseBdev1_malloc 00:22:39.235 05:03:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:39.235 [2024-11-18 05:03:02.708271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:39.235 [2024-11-18 05:03:02.708361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.235 [2024-11-18 05:03:02.708397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:39.235 [2024-11-18 05:03:02.708413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.235 [2024-11-18 05:03:02.710643] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.235 [2024-11-18 05:03:02.710826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:39.235 BaseBdev1 00:22:39.235 05:03:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:39.235 05:03:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:39.235 05:03:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:39.494 BaseBdev2_malloc 00:22:39.494 05:03:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:39.754 [2024-11-18 05:03:03.127442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:39.754 [2024-11-18 05:03:03.127525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.754 [2024-11-18 05:03:03.127587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:39.754 [2024-11-18 05:03:03.127605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.754 [2024-11-18 05:03:03.129819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.754 [2024-11-18 05:03:03.129876] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:39.754 BaseBdev2 00:22:39.754 05:03:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:22:39.754 05:03:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:39.754 05:03:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:40.013 BaseBdev3_malloc 00:22:40.013 05:03:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:40.273 [2024-11-18 05:03:03.546471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:40.273 [2024-11-18 05:03:03.546585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.273 [2024-11-18 05:03:03.546613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:40.273 [2024-11-18 05:03:03.546627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.273 [2024-11-18 05:03:03.548758] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.273 [2024-11-18 05:03:03.548815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:40.273 BaseBdev3 00:22:40.273 05:03:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:40.273 spare_malloc 00:22:40.273 05:03:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:40.533 spare_delay 00:22:40.533 05:03:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:40.792 [2024-11-18 05:03:04.115786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.792 [2024-11-18 05:03:04.115851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.792 [2024-11-18 05:03:04.115877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:22:40.792 [2024-11-18 05:03:04.115890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.792 [2024-11-18 05:03:04.118181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.792 [2024-11-18 05:03:04.118425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.792 spare 00:22:40.792 05:03:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:40.792 [2024-11-18 05:03:04.307900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.792 [2024-11-18 05:03:04.309767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.792 [2024-11-18 05:03:04.309991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.792 [2024-11-18 05:03:04.310312] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:22:40.792 [2024-11-18 05:03:04.310443] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:40.792 [2024-11-18 05:03:04.310628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:41.051 [2024-11-18 05:03:04.315596] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:22:41.052 [2024-11-18 05:03:04.315745] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:22:41.052 [2024-11-18 05:03:04.316116] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.052 "name": "raid_bdev1", 00:22:41.052 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:41.052 "strip_size_kb": 64, 00:22:41.052 "state": "online", 00:22:41.052 "raid_level": "raid5f", 00:22:41.052 "superblock": true, 00:22:41.052 "num_base_bdevs": 3, 00:22:41.052 "num_base_bdevs_discovered": 3, 00:22:41.052 "num_base_bdevs_operational": 3, 00:22:41.052 "base_bdevs_list": [ 00:22:41.052 { 00:22:41.052 "name": "BaseBdev1", 00:22:41.052 "uuid": "f5482f77-b0c1-50d1-88b0-8e142f32aac4", 00:22:41.052 "is_configured": true, 00:22:41.052 "data_offset": 2048, 00:22:41.052 "data_size": 63488 00:22:41.052 }, 00:22:41.052 { 00:22:41.052 "name": "BaseBdev2", 00:22:41.052 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:41.052 "is_configured": true, 00:22:41.052 "data_offset": 2048, 00:22:41.052 "data_size": 63488 00:22:41.052 }, 00:22:41.052 { 00:22:41.052 "name": "BaseBdev3", 00:22:41.052 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:41.052 "is_configured": true, 00:22:41.052 "data_offset": 2048, 00:22:41.052 "data_size": 63488 00:22:41.052 } 00:22:41.052 ] 00:22:41.052 }' 00:22:41.052 05:03:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.052 05:03:04 -- common/autotest_common.sh@10 -- # set +x 00:22:41.311 05:03:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:41.311 05:03:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:41.570 [2024-11-18 05:03:05.025120] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.570 05:03:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:22:41.570 05:03:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:41.570 05:03:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.830 05:03:05 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:41.830 05:03:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:41.830 05:03:05 -- 
bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:41.830 05:03:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@12 -- # local i 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.830 05:03:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:42.089 [2024-11-18 05:03:05.405081] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:42.089 /dev/nbd0 00:22:42.089 05:03:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:42.089 05:03:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:42.089 05:03:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:42.089 05:03:05 -- common/autotest_common.sh@867 -- # local i 00:22:42.089 05:03:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:42.089 05:03:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:42.089 05:03:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:42.089 05:03:05 -- common/autotest_common.sh@871 -- # break 00:22:42.089 05:03:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:42.089 05:03:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:42.089 05:03:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.089 1+0 records in 00:22:42.089 1+0 records out 00:22:42.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187756 s, 21.8 MB/s 00:22:42.089 05:03:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.089 05:03:05 -- common/autotest_common.sh@884 -- # size=4096 00:22:42.089 05:03:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.089 05:03:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:42.089 05:03:05 -- common/autotest_common.sh@887 -- # return 0 00:22:42.089 05:03:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.089 05:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.089 05:03:05 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:42.089 05:03:05 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:42.089 05:03:05 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:42.089 05:03:05 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:42.348 496+0 records in 00:22:42.348 496+0 records out 00:22:42.348 65011712 bytes (65 MB, 62 MiB) copied, 0.325706 s, 200 MB/s 00:22:42.348 05:03:05 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:42.348 05:03:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.348 05:03:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:42.348 05:03:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.348 05:03:05 -- bdev/nbd_common.sh@51 -- # local i 00:22:42.348 05:03:05 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:22:42.348 05:03:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:42.607 [2024-11-18 05:03:05.967267] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@41 -- # break 00:22:42.607 05:03:05 -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.607 05:03:05 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:42.866 [2024-11-18 05:03:06.228976] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.866 05:03:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.125 05:03:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.125 "name": "raid_bdev1", 00:22:43.125 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:43.125 "strip_size_kb": 64, 00:22:43.125 "state": "online", 00:22:43.125 "raid_level": "raid5f", 00:22:43.125 "superblock": true, 00:22:43.125 "num_base_bdevs": 3, 00:22:43.125 "num_base_bdevs_discovered": 2, 00:22:43.125 "num_base_bdevs_operational": 2, 00:22:43.125 "base_bdevs_list": [ 00:22:43.125 { 00:22:43.125 "name": null, 00:22:43.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.125 "is_configured": false, 00:22:43.125 "data_offset": 2048, 00:22:43.125 "data_size": 63488 00:22:43.125 }, 00:22:43.125 { 00:22:43.125 "name": "BaseBdev2", 00:22:43.125 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:43.125 "is_configured": true, 00:22:43.125 "data_offset": 2048, 00:22:43.125 "data_size": 63488 00:22:43.125 }, 00:22:43.125 { 00:22:43.125 "name": "BaseBdev3", 00:22:43.125 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:43.125 "is_configured": true, 00:22:43.125 "data_offset": 2048, 00:22:43.126 "data_size": 63488 00:22:43.126 } 00:22:43.126 ] 00:22:43.126 }' 00:22:43.126 05:03:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.126 05:03:06 -- common/autotest_common.sh@10 -- # set +x 00:22:43.385 05:03:06 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:43.385 [2024-11-18 05:03:06.849109] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:43.385 [2024-11-18 05:03:06.849149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:43.385 [2024-11-18 05:03:06.859741] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028830 00:22:43.385 [2024-11-18 05:03:06.865527] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:43.385 05:03:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.763 05:03:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.763 05:03:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:44.763 "name": "raid_bdev1", 00:22:44.763 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:44.763 "strip_size_kb": 64, 00:22:44.763 "state": "online", 00:22:44.763 "raid_level": "raid5f", 00:22:44.763 "superblock": true, 00:22:44.763 "num_base_bdevs": 3, 00:22:44.763 "num_base_bdevs_discovered": 3, 00:22:44.763 "num_base_bdevs_operational": 3, 00:22:44.763 "process": { 00:22:44.763 "type": "rebuild", 00:22:44.763 "target": "spare", 00:22:44.763 "progress": { 00:22:44.763 "blocks": 22528, 00:22:44.763 "percent": 17 00:22:44.763 } 00:22:44.763 }, 00:22:44.763 "base_bdevs_list": [ 00:22:44.763 { 00:22:44.763 "name": "spare", 00:22:44.763 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:44.763 "is_configured": true, 00:22:44.763 "data_offset": 2048, 00:22:44.763 "data_size": 63488 00:22:44.763 }, 00:22:44.763 { 00:22:44.763 "name": "BaseBdev2", 00:22:44.763 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:44.763 "is_configured": true, 00:22:44.763 "data_offset": 2048, 00:22:44.763 "data_size": 63488 00:22:44.763 }, 00:22:44.763 { 00:22:44.763 "name": "BaseBdev3", 00:22:44.763 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:44.764 "is_configured": true, 00:22:44.764 "data_offset": 2048, 00:22:44.764 "data_size": 63488 00:22:44.764 } 00:22:44.764 ] 00:22:44.764 }' 00:22:44.764 05:03:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:44.764 05:03:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.764 05:03:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:44.764 05:03:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.764 05:03:08 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:45.023 [2024-11-18 05:03:08.307230] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.023 [2024-11-18 05:03:08.377799] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:45.023 [2024-11-18 05:03:08.378066] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.023 05:03:08 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.023 05:03:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.282 05:03:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.282 "name": "raid_bdev1", 00:22:45.282 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:45.282 "strip_size_kb": 64, 00:22:45.282 "state": "online", 00:22:45.282 "raid_level": "raid5f", 00:22:45.282 "superblock": true, 00:22:45.282 "num_base_bdevs": 3, 00:22:45.282 "num_base_bdevs_discovered": 2, 00:22:45.282 "num_base_bdevs_operational": 2, 00:22:45.282 "base_bdevs_list": [ 00:22:45.282 { 00:22:45.282 "name": null, 00:22:45.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.282 "is_configured": false, 00:22:45.282 "data_offset": 2048, 00:22:45.282 "data_size": 63488 00:22:45.282 }, 00:22:45.282 { 00:22:45.282 "name": "BaseBdev2", 00:22:45.282 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:45.282 "is_configured": true, 00:22:45.282 "data_offset": 2048, 00:22:45.282 "data_size": 63488 00:22:45.282 }, 00:22:45.282 { 00:22:45.282 "name": "BaseBdev3", 00:22:45.282 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:45.282 "is_configured": true, 00:22:45.282 "data_offset": 2048, 00:22:45.282 "data_size": 63488 00:22:45.282 } 00:22:45.282 ] 00:22:45.282 }' 00:22:45.282 05:03:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.282 05:03:08 -- common/autotest_common.sh@10 -- # set +x 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.541 05:03:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.800 05:03:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.801 "name": "raid_bdev1", 00:22:45.801 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:45.801 "strip_size_kb": 64, 00:22:45.801 "state": "online", 00:22:45.801 "raid_level": "raid5f", 00:22:45.801 "superblock": true, 00:22:45.801 "num_base_bdevs": 3, 00:22:45.801 "num_base_bdevs_discovered": 2, 00:22:45.801 "num_base_bdevs_operational": 2, 00:22:45.801 "base_bdevs_list": [ 00:22:45.801 { 00:22:45.801 "name": null, 00:22:45.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.801 "is_configured": 
false, 00:22:45.801 "data_offset": 2048, 00:22:45.801 "data_size": 63488 00:22:45.801 }, 00:22:45.801 { 00:22:45.801 "name": "BaseBdev2", 00:22:45.801 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:45.801 "is_configured": true, 00:22:45.801 "data_offset": 2048, 00:22:45.801 "data_size": 63488 00:22:45.801 }, 00:22:45.801 { 00:22:45.801 "name": "BaseBdev3", 00:22:45.801 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:45.801 "is_configured": true, 00:22:45.801 "data_offset": 2048, 00:22:45.801 "data_size": 63488 00:22:45.801 } 00:22:45.801 ] 00:22:45.801 }' 00:22:45.801 05:03:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.801 05:03:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.801 05:03:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.801 05:03:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:45.801 05:03:09 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.059 [2024-11-18 05:03:09.368245] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:46.059 [2024-11-18 05:03:09.368308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.059 [2024-11-18 05:03:09.379783] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028900 00:22:46.059 [2024-11-18 05:03:09.385749] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.059 05:03:09 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.996 05:03:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.255 "name": "raid_bdev1", 00:22:47.255 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:47.255 "strip_size_kb": 64, 00:22:47.255 "state": "online", 00:22:47.255 "raid_level": "raid5f", 00:22:47.255 "superblock": true, 00:22:47.255 "num_base_bdevs": 3, 00:22:47.255 "num_base_bdevs_discovered": 3, 00:22:47.255 "num_base_bdevs_operational": 3, 00:22:47.255 "process": { 00:22:47.255 "type": "rebuild", 00:22:47.255 "target": "spare", 00:22:47.255 "progress": { 00:22:47.255 "blocks": 24576, 00:22:47.255 "percent": 19 00:22:47.255 } 00:22:47.255 }, 00:22:47.255 "base_bdevs_list": [ 00:22:47.255 { 00:22:47.255 "name": "spare", 00:22:47.255 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:47.255 "is_configured": true, 00:22:47.255 "data_offset": 2048, 00:22:47.255 "data_size": 63488 00:22:47.255 }, 00:22:47.255 { 00:22:47.255 "name": "BaseBdev2", 00:22:47.255 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:47.255 "is_configured": true, 00:22:47.255 "data_offset": 2048, 00:22:47.255 "data_size": 63488 00:22:47.255 }, 00:22:47.255 { 00:22:47.255 "name": "BaseBdev3", 00:22:47.255 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:47.255 "is_configured": true, 
00:22:47.255 "data_offset": 2048, 00:22:47.255 "data_size": 63488 00:22:47.255 } 00:22:47.255 ] 00:22:47.255 }' 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:47.255 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@657 -- # local timeout=564 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.255 05:03:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.514 05:03:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.514 "name": "raid_bdev1", 00:22:47.514 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:47.515 "strip_size_kb": 64, 00:22:47.515 "state": "online", 00:22:47.515 "raid_level": "raid5f", 00:22:47.515 "superblock": true, 00:22:47.515 "num_base_bdevs": 3, 00:22:47.515 "num_base_bdevs_discovered": 3, 00:22:47.515 "num_base_bdevs_operational": 3, 00:22:47.515 "process": { 00:22:47.515 "type": "rebuild", 00:22:47.515 "target": "spare", 00:22:47.515 "progress": { 00:22:47.515 "blocks": 28672, 00:22:47.515 "percent": 22 00:22:47.515 } 00:22:47.515 }, 00:22:47.515 "base_bdevs_list": [ 00:22:47.515 { 00:22:47.515 "name": "spare", 00:22:47.515 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:47.515 "is_configured": true, 00:22:47.515 "data_offset": 2048, 00:22:47.515 "data_size": 63488 00:22:47.515 }, 00:22:47.515 { 00:22:47.515 "name": "BaseBdev2", 00:22:47.515 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:47.515 "is_configured": true, 00:22:47.515 "data_offset": 2048, 00:22:47.515 "data_size": 63488 00:22:47.515 }, 00:22:47.515 { 00:22:47.515 "name": "BaseBdev3", 00:22:47.515 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:47.515 "is_configured": true, 00:22:47.515 "data_offset": 2048, 00:22:47.515 "data_size": 63488 00:22:47.515 } 00:22:47.515 ] 00:22:47.515 }' 00:22:47.515 05:03:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.515 05:03:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.515 05:03:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.515 05:03:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.515 05:03:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.452 05:03:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.711 05:03:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.711 "name": "raid_bdev1", 00:22:48.711 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:48.711 "strip_size_kb": 64, 00:22:48.711 "state": "online", 00:22:48.711 "raid_level": "raid5f", 00:22:48.711 "superblock": true, 00:22:48.711 "num_base_bdevs": 3, 00:22:48.711 "num_base_bdevs_discovered": 3, 00:22:48.711 "num_base_bdevs_operational": 3, 00:22:48.711 "process": { 00:22:48.711 "type": "rebuild", 00:22:48.711 "target": "spare", 00:22:48.711 "progress": { 00:22:48.711 "blocks": 53248, 00:22:48.711 "percent": 41 00:22:48.711 } 00:22:48.711 }, 00:22:48.711 "base_bdevs_list": [ 00:22:48.711 { 00:22:48.711 "name": "spare", 00:22:48.711 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:48.711 "is_configured": true, 00:22:48.711 "data_offset": 2048, 00:22:48.711 "data_size": 63488 00:22:48.711 }, 00:22:48.711 { 00:22:48.711 "name": "BaseBdev2", 00:22:48.711 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:48.711 "is_configured": true, 00:22:48.711 "data_offset": 2048, 00:22:48.711 "data_size": 63488 00:22:48.711 }, 00:22:48.711 { 00:22:48.711 "name": "BaseBdev3", 00:22:48.711 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:48.711 "is_configured": true, 00:22:48.711 "data_offset": 2048, 00:22:48.711 "data_size": 63488 00:22:48.711 } 00:22:48.711 ] 00:22:48.711 }' 00:22:48.711 05:03:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.711 05:03:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.711 05:03:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.711 05:03:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.711 05:03:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.648 05:03:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.908 05:03:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.908 "name": "raid_bdev1", 00:22:49.908 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:49.908 "strip_size_kb": 64, 00:22:49.908 "state": "online", 00:22:49.908 "raid_level": "raid5f", 00:22:49.908 "superblock": true, 00:22:49.908 "num_base_bdevs": 3, 00:22:49.908 "num_base_bdevs_discovered": 3, 00:22:49.908 "num_base_bdevs_operational": 3, 00:22:49.908 "process": { 
00:22:49.908 "type": "rebuild", 00:22:49.908 "target": "spare", 00:22:49.908 "progress": { 00:22:49.908 "blocks": 79872, 00:22:49.908 "percent": 62 00:22:49.908 } 00:22:49.908 }, 00:22:49.908 "base_bdevs_list": [ 00:22:49.908 { 00:22:49.908 "name": "spare", 00:22:49.908 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:49.908 "is_configured": true, 00:22:49.908 "data_offset": 2048, 00:22:49.908 "data_size": 63488 00:22:49.908 }, 00:22:49.908 { 00:22:49.908 "name": "BaseBdev2", 00:22:49.908 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:49.908 "is_configured": true, 00:22:49.908 "data_offset": 2048, 00:22:49.908 "data_size": 63488 00:22:49.908 }, 00:22:49.908 { 00:22:49.908 "name": "BaseBdev3", 00:22:49.908 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:49.908 "is_configured": true, 00:22:49.908 "data_offset": 2048, 00:22:49.908 "data_size": 63488 00:22:49.908 } 00:22:49.908 ] 00:22:49.908 }' 00:22:49.908 05:03:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.908 05:03:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.908 05:03:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.908 05:03:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.908 05:03:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.287 "name": "raid_bdev1", 00:22:51.287 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:51.287 "strip_size_kb": 64, 00:22:51.287 "state": "online", 00:22:51.287 "raid_level": "raid5f", 00:22:51.287 "superblock": true, 00:22:51.287 "num_base_bdevs": 3, 00:22:51.287 "num_base_bdevs_discovered": 3, 00:22:51.287 "num_base_bdevs_operational": 3, 00:22:51.287 "process": { 00:22:51.287 "type": "rebuild", 00:22:51.287 "target": "spare", 00:22:51.287 "progress": { 00:22:51.287 "blocks": 104448, 00:22:51.287 "percent": 82 00:22:51.287 } 00:22:51.287 }, 00:22:51.287 "base_bdevs_list": [ 00:22:51.287 { 00:22:51.287 "name": "spare", 00:22:51.287 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:51.287 "is_configured": true, 00:22:51.287 "data_offset": 2048, 00:22:51.287 "data_size": 63488 00:22:51.287 }, 00:22:51.287 { 00:22:51.287 "name": "BaseBdev2", 00:22:51.287 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:51.287 "is_configured": true, 00:22:51.287 "data_offset": 2048, 00:22:51.287 "data_size": 63488 00:22:51.287 }, 00:22:51.287 { 00:22:51.287 "name": "BaseBdev3", 00:22:51.287 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:51.287 "is_configured": true, 00:22:51.287 "data_offset": 2048, 00:22:51.287 "data_size": 63488 00:22:51.287 } 00:22:51.287 ] 00:22:51.287 }' 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@190 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.287 05:03:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:52.225 [2024-11-18 05:03:15.633730] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:52.225 [2024-11-18 05:03:15.633997] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:52.225 [2024-11-18 05:03:15.634137] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.225 05:03:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.485 "name": "raid_bdev1", 00:22:52.485 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:52.485 "strip_size_kb": 64, 00:22:52.485 "state": "online", 00:22:52.485 "raid_level": "raid5f", 00:22:52.485 "superblock": true, 00:22:52.485 "num_base_bdevs": 3, 00:22:52.485 "num_base_bdevs_discovered": 3, 00:22:52.485 "num_base_bdevs_operational": 3, 00:22:52.485 "base_bdevs_list": [ 00:22:52.485 { 00:22:52.485 "name": "spare", 00:22:52.485 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 }, 00:22:52.485 { 00:22:52.485 "name": "BaseBdev2", 00:22:52.485 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 }, 00:22:52.485 { 00:22:52.485 "name": "BaseBdev3", 00:22:52.485 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 } 00:22:52.485 ] 00:22:52.485 }' 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@660 -- # break 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.485 05:03:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.745 "name": "raid_bdev1", 00:22:52.745 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:52.745 "strip_size_kb": 64, 00:22:52.745 "state": "online", 00:22:52.745 "raid_level": "raid5f", 00:22:52.745 "superblock": true, 00:22:52.745 "num_base_bdevs": 3, 00:22:52.745 "num_base_bdevs_discovered": 3, 00:22:52.745 "num_base_bdevs_operational": 3, 00:22:52.745 "base_bdevs_list": [ 00:22:52.745 { 00:22:52.745 "name": "spare", 00:22:52.745 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:52.745 "is_configured": true, 00:22:52.745 "data_offset": 2048, 00:22:52.745 "data_size": 63488 00:22:52.745 }, 00:22:52.745 { 00:22:52.745 "name": "BaseBdev2", 00:22:52.745 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:52.745 "is_configured": true, 00:22:52.745 "data_offset": 2048, 00:22:52.745 "data_size": 63488 00:22:52.745 }, 00:22:52.745 { 00:22:52.745 "name": "BaseBdev3", 00:22:52.745 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:52.745 "is_configured": true, 00:22:52.745 "data_offset": 2048, 00:22:52.745 "data_size": 63488 00:22:52.745 } 00:22:52.745 ] 00:22:52.745 }' 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.745 05:03:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.004 05:03:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.004 "name": "raid_bdev1", 00:22:53.004 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:53.004 "strip_size_kb": 64, 00:22:53.004 "state": "online", 00:22:53.004 "raid_level": "raid5f", 00:22:53.004 "superblock": true, 00:22:53.004 "num_base_bdevs": 3, 00:22:53.004 "num_base_bdevs_discovered": 3, 00:22:53.004 "num_base_bdevs_operational": 3, 00:22:53.004 "base_bdevs_list": [ 00:22:53.004 { 00:22:53.004 "name": "spare", 00:22:53.004 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:53.004 "is_configured": true, 00:22:53.004 "data_offset": 2048, 00:22:53.004 "data_size": 63488 00:22:53.004 }, 00:22:53.004 { 00:22:53.004 "name": "BaseBdev2", 00:22:53.004 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:53.005 "is_configured": true, 00:22:53.005 "data_offset": 2048, 00:22:53.005 "data_size": 63488 00:22:53.005 }, 00:22:53.005 { 00:22:53.005 "name": "BaseBdev3", 00:22:53.005 "uuid": 
"c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:53.005 "is_configured": true, 00:22:53.005 "data_offset": 2048, 00:22:53.005 "data_size": 63488 00:22:53.005 } 00:22:53.005 ] 00:22:53.005 }' 00:22:53.005 05:03:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.005 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:22:53.264 05:03:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:53.523 [2024-11-18 05:03:16.896438] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:53.523 [2024-11-18 05:03:16.896469] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:53.523 [2024-11-18 05:03:16.896551] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:53.523 [2024-11-18 05:03:16.896630] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:53.523 [2024-11-18 05:03:16.896647] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:22:53.523 05:03:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:53.523 05:03:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.783 05:03:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:53.783 05:03:17 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:53.783 05:03:17 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@12 -- # local i 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.783 05:03:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:54.042 /dev/nbd0 00:22:54.043 05:03:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:54.043 05:03:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:54.043 05:03:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:54.043 05:03:17 -- common/autotest_common.sh@867 -- # local i 00:22:54.043 05:03:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:54.043 05:03:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:54.043 05:03:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:54.043 05:03:17 -- common/autotest_common.sh@871 -- # break 00:22:54.043 05:03:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:54.043 05:03:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:54.043 05:03:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.043 1+0 records in 00:22:54.043 1+0 records out 00:22:54.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022294 s, 18.4 MB/s 00:22:54.043 05:03:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.043 
05:03:17 -- common/autotest_common.sh@884 -- # size=4096 00:22:54.043 05:03:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.043 05:03:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:54.043 05:03:17 -- common/autotest_common.sh@887 -- # return 0 00:22:54.043 05:03:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.043 05:03:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.043 05:03:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:54.302 /dev/nbd1 00:22:54.302 05:03:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:54.302 05:03:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:54.302 05:03:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:54.302 05:03:17 -- common/autotest_common.sh@867 -- # local i 00:22:54.302 05:03:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:54.302 05:03:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:54.302 05:03:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:54.302 05:03:17 -- common/autotest_common.sh@871 -- # break 00:22:54.302 05:03:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:54.302 05:03:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:54.302 05:03:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.302 1+0 records in 00:22:54.302 1+0 records out 00:22:54.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336763 s, 12.2 MB/s 00:22:54.302 05:03:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.302 05:03:17 -- common/autotest_common.sh@884 -- # size=4096 00:22:54.302 05:03:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.302 05:03:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:54.302 05:03:17 -- common/autotest_common.sh@887 -- # return 0 00:22:54.302 05:03:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.302 05:03:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.302 05:03:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:54.562 05:03:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:54.562 05:03:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.562 05:03:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.562 05:03:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.562 05:03:17 -- bdev/nbd_common.sh@51 -- # local i 00:22:54.562 05:03:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.562 05:03:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@41 -- # break 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@53 -- 
# for i in "${nbd_list[@]}" 00:22:54.562 05:03:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@41 -- # break 00:22:54.822 05:03:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.822 05:03:18 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:54.822 05:03:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:54.822 05:03:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:54.822 05:03:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:55.082 05:03:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.341 [2024-11-18 05:03:18.735367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.341 [2024-11-18 05:03:18.735456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.341 [2024-11-18 05:03:18.735485] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:22:55.341 [2024-11-18 05:03:18.735499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.341 [2024-11-18 05:03:18.737659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.341 [2024-11-18 05:03:18.737700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.341 [2024-11-18 05:03:18.737786] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:55.341 [2024-11-18 05:03:18.737844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.341 BaseBdev1 00:22:55.341 05:03:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.341 05:03:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:55.341 05:03:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:55.600 05:03:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:55.876 [2024-11-18 05:03:19.196473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:55.876 [2024-11-18 05:03:19.196554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.876 [2024-11-18 05:03:19.196581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:22:55.876 [2024-11-18 05:03:19.196596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.876 [2024-11-18 05:03:19.197027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.876 [2024-11-18 05:03:19.197091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:55.876 
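(Editor's note) The restart path above deletes each passthru and re-creates it on the same malloc base. Because the array was built with -s, every member carries an on-disk superblock, so bdev_raid's examine callback rediscovers and re-claims the device automatically; no explicit re-add is needed. A condensed sketch of one such cycle, matching the logged RPCs:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_passthru_delete BaseBdev1
    $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # the log then reports: "raid superblock found on bdev BaseBdev1" followed by "bdev BaseBdev1 is claimed"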
[2024-11-18 05:03:19.197211] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:55.876 [2024-11-18 05:03:19.197290] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:55.876 [2024-11-18 05:03:19.197308] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.876 [2024-11-18 05:03:19.197338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state configuring 00:22:55.876 [2024-11-18 05:03:19.197414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.876 BaseBdev2 00:22:55.876 05:03:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.877 05:03:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:55.877 05:03:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:56.137 05:03:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:56.137 [2024-11-18 05:03:19.572546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:56.137 [2024-11-18 05:03:19.572765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.137 [2024-11-18 05:03:19.572857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:22:56.137 [2024-11-18 05:03:19.572875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.137 [2024-11-18 05:03:19.573345] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.137 [2024-11-18 05:03:19.573376] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:56.137 [2024-11-18 05:03:19.573472] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:56.137 [2024-11-18 05:03:19.573498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:56.137 BaseBdev3 00:22:56.137 05:03:19 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:56.396 05:03:19 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:56.655 [2024-11-18 05:03:19.931395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.655 [2024-11-18 05:03:19.931608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.655 [2024-11-18 05:03:19.931647] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:22:56.655 [2024-11-18 05:03:19.931660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.655 [2024-11-18 05:03:19.932127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.655 [2024-11-18 05:03:19.932148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.655 [2024-11-18 05:03:19.932282] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:56.655 [2024-11-18 05:03:19.932311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
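(Editor's note) Once the spare is claimed, the verification that follows is a jq filter over bdev_raid_get_bdevs output; the traced helper asserts state, RAID level, strip size, and member counts. A condensed, self-contained sketch of the same check (the fields mirror the helper, while the [[ ]] assertion style here is an assumption):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<<"$info") == online ]]
    [[ $(jq -r .raid_level <<<"$info") == raid5f ]]
    [[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<<"$info") == 3 ]]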
00:22:56.655 spare 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.655 05:03:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.656 05:03:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.656 [2024-11-18 05:03:20.032459] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000b480 00:22:56.656 [2024-11-18 05:03:20.032685] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:56.656 [2024-11-18 05:03:20.032886] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000046fb0 00:22:56.656 [2024-11-18 05:03:20.037250] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000b480 00:22:56.656 [2024-11-18 05:03:20.037393] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000b480 00:22:56.656 [2024-11-18 05:03:20.037677] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.914 05:03:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.914 "name": "raid_bdev1", 00:22:56.914 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:56.914 "strip_size_kb": 64, 00:22:56.914 "state": "online", 00:22:56.914 "raid_level": "raid5f", 00:22:56.914 "superblock": true, 00:22:56.914 "num_base_bdevs": 3, 00:22:56.914 "num_base_bdevs_discovered": 3, 00:22:56.914 "num_base_bdevs_operational": 3, 00:22:56.914 "base_bdevs_list": [ 00:22:56.914 { 00:22:56.914 "name": "spare", 00:22:56.914 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:56.914 "is_configured": true, 00:22:56.914 "data_offset": 2048, 00:22:56.914 "data_size": 63488 00:22:56.914 }, 00:22:56.914 { 00:22:56.914 "name": "BaseBdev2", 00:22:56.914 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:56.914 "is_configured": true, 00:22:56.914 "data_offset": 2048, 00:22:56.914 "data_size": 63488 00:22:56.914 }, 00:22:56.914 { 00:22:56.914 "name": "BaseBdev3", 00:22:56.914 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:56.914 "is_configured": true, 00:22:56.915 "data_offset": 2048, 00:22:56.915 "data_size": 63488 00:22:56.915 } 00:22:56.915 ] 00:22:56.915 }' 00:22:56.915 05:03:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.915 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.173 05:03:20 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.173 05:03:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.173 05:03:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:57.173 05:03:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:57.173 05:03:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.173 
05:03:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.174 05:03:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.432 "name": "raid_bdev1", 00:22:57.432 "uuid": "5f25461a-9634-439c-871d-5eb1ec5772e7", 00:22:57.432 "strip_size_kb": 64, 00:22:57.432 "state": "online", 00:22:57.432 "raid_level": "raid5f", 00:22:57.432 "superblock": true, 00:22:57.432 "num_base_bdevs": 3, 00:22:57.432 "num_base_bdevs_discovered": 3, 00:22:57.432 "num_base_bdevs_operational": 3, 00:22:57.432 "base_bdevs_list": [ 00:22:57.432 { 00:22:57.432 "name": "spare", 00:22:57.432 "uuid": "ff58185a-7bad-52d8-ba98-676c41fe5ed7", 00:22:57.432 "is_configured": true, 00:22:57.432 "data_offset": 2048, 00:22:57.432 "data_size": 63488 00:22:57.432 }, 00:22:57.432 { 00:22:57.432 "name": "BaseBdev2", 00:22:57.432 "uuid": "1326bbf6-bb2a-524e-8635-3e13772b1796", 00:22:57.432 "is_configured": true, 00:22:57.432 "data_offset": 2048, 00:22:57.432 "data_size": 63488 00:22:57.432 }, 00:22:57.432 { 00:22:57.432 "name": "BaseBdev3", 00:22:57.432 "uuid": "c3aa8944-bd54-5706-b908-f0d0c8276e14", 00:22:57.432 "is_configured": true, 00:22:57.432 "data_offset": 2048, 00:22:57.432 "data_size": 63488 00:22:57.432 } 00:22:57.432 ] 00:22:57.432 }' 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.432 05:03:20 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:57.691 05:03:20 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.691 05:03:20 -- bdev/bdev_raid.sh@709 -- # killprocess 84049 00:22:57.691 05:03:20 -- common/autotest_common.sh@936 -- # '[' -z 84049 ']' 00:22:57.691 05:03:20 -- common/autotest_common.sh@940 -- # kill -0 84049 00:22:57.691 05:03:20 -- common/autotest_common.sh@941 -- # uname 00:22:57.691 05:03:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.691 05:03:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84049 00:22:57.691 killing process with pid 84049 00:22:57.691 Received shutdown signal, test time was about 60.000000 seconds 00:22:57.691 00:22:57.691 Latency(us) 00:22:57.691 [2024-11-18T05:03:21.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.691 [2024-11-18T05:03:21.215Z] =================================================================================================================== 00:22:57.691 [2024-11-18T05:03:21.215Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.691 05:03:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:57.691 05:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:57.691 05:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84049' 00:22:57.691 05:03:21 -- common/autotest_common.sh@955 -- # kill 84049 00:22:57.691 05:03:21 -- common/autotest_common.sh@960 -- # wait 84049 00:22:57.691 [2024-11-18 05:03:21.001512] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.692 
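The @190/@191 checks and the killprocess call above are the standard wind-down for these tests: confirm the raid reports no background process (no rebuild still running), confirm the rebuilt spare ended up first in the member list, then stop the I/O process, apparently bdevperf judging by the 60-second shutdown banner. The all-zero latency table is expected: min prints as 18446744073709551616.00, which is UINT64_MAX rounded to 2^64 by the float formatting, because no I/O completed to lower it. Approximate shape of the teardown (pid 84049 is specific to this run; killprocess itself is an autotest_common.sh helper this merely mirrors):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # No rebuild should be reported once the array is clean (@190/@191):
  $rpc -s "$sock" bdev_raid_get_bdevs all \
      | jq -e '.[] | select(.name == "raid_bdev1") | (.process.type // "none") == "none"'
  # The rebuilt spare should now be the first member (@706):
  $rpc -s "$sock" bdev_raid_get_bdevs all | jq -e '.[].base_bdevs_list[0].name == "spare"'
  # Stop the daemon the way killprocess does: liveness check, name check, kill, reap.
  kill -0 84049 && [[ $(ps --no-headers -o comm= 84049) == reactor_0 ]]
  kill 84049 && wait 84049   # wait succeeds because this shell spawned the process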
[2024-11-18 05:03:21.001619] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.692 [2024-11-18 05:03:21.001735] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.692 [2024-11-18 05:03:21.001754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b480 name raid_bdev1, state offline 00:22:57.951 [2024-11-18 05:03:21.254663] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.889 ************************************ 00:22:58.889 END TEST raid5f_rebuild_test_sb 00:22:58.889 ************************************ 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:58.889 00:22:58.889 real 0m20.855s 00:22:58.889 user 0m30.602s 00:22:58.889 sys 0m2.684s 00:22:58.889 05:03:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:58.889 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:58.889 05:03:22 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:58.889 05:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.889 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:22:58.889 ************************************ 00:22:58.889 START TEST raid5f_state_function_test 00:22:58.889 ************************************ 00:22:58.889 05:03:22 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:58.889 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:58.890 Process raid pid: 
84609 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=84609 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84609' 00:22:58.890 05:03:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84609 /var/tmp/spdk-raid.sock 00:22:58.890 05:03:22 -- common/autotest_common.sh@829 -- # '[' -z 84609 ']' 00:22:58.890 05:03:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:58.890 05:03:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.890 05:03:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:58.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:58.890 05:03:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.890 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:22:58.890 [2024-11-18 05:03:22.262362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:58.890 [2024-11-18 05:03:22.262658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.149 [2024-11-18 05:03:22.416518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.149 [2024-11-18 05:03:22.574786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.409 [2024-11-18 05:03:22.731411] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.977 05:03:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.977 05:03:23 -- common/autotest_common.sh@862 -- # return 0 00:22:59.977 05:03:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:59.977 [2024-11-18 05:03:23.448957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:59.977 [2024-11-18 05:03:23.449011] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:59.977 [2024-11-18 05:03:23.449025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:59.977 [2024-11-18 05:03:23.449037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:59.977 [2024-11-18 05:03:23.449045] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:59.977 [2024-11-18 05:03:23.449055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:59.978 [2024-11-18 05:03:23.449062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:59.978 [2024-11-18 05:03:23.449073] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:59.978 
05:03:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.978 05:03:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.237 05:03:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.237 "name": "Existed_Raid", 00:23:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.237 "strip_size_kb": 64, 00:23:00.237 "state": "configuring", 00:23:00.237 "raid_level": "raid5f", 00:23:00.237 "superblock": false, 00:23:00.237 "num_base_bdevs": 4, 00:23:00.237 "num_base_bdevs_discovered": 0, 00:23:00.237 "num_base_bdevs_operational": 4, 00:23:00.237 "base_bdevs_list": [ 00:23:00.237 { 00:23:00.237 "name": "BaseBdev1", 00:23:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.237 "is_configured": false, 00:23:00.237 "data_offset": 0, 00:23:00.237 "data_size": 0 00:23:00.237 }, 00:23:00.237 { 00:23:00.237 "name": "BaseBdev2", 00:23:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.237 "is_configured": false, 00:23:00.237 "data_offset": 0, 00:23:00.237 "data_size": 0 00:23:00.237 }, 00:23:00.237 { 00:23:00.237 "name": "BaseBdev3", 00:23:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.237 "is_configured": false, 00:23:00.237 "data_offset": 0, 00:23:00.237 "data_size": 0 00:23:00.237 }, 00:23:00.237 { 00:23:00.237 "name": "BaseBdev4", 00:23:00.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.237 "is_configured": false, 00:23:00.237 "data_offset": 0, 00:23:00.237 "data_size": 0 00:23:00.237 } 00:23:00.237 ] 00:23:00.237 }' 00:23:00.237 05:03:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.237 05:03:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.495 05:03:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:00.754 [2024-11-18 05:03:24.133021] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:00.754 [2024-11-18 05:03:24.133063] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:23:00.754 05:03:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:01.013 [2024-11-18 05:03:24.377106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.013 [2024-11-18 05:03:24.377158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.013 [2024-11-18 05:03:24.377170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.013 [2024-11-18 05:03:24.377183] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.013 [2024-11-18 05:03:24.377223] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:01.014 [2024-11-18 05:03:24.377237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:01.014 [2024-11-18 05:03:24.377245] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:01.014 [2024-11-18 05:03:24.377256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:01.014 05:03:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:01.273 [2024-11-18 05:03:24.585607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:01.273 BaseBdev1 00:23:01.273 05:03:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:01.273 05:03:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:01.273 05:03:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:01.273 05:03:24 -- common/autotest_common.sh@899 -- # local i 00:23:01.273 05:03:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:01.273 05:03:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:01.273 05:03:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.273 05:03:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:01.532 [ 00:23:01.532 { 00:23:01.532 "name": "BaseBdev1", 00:23:01.532 "aliases": [ 00:23:01.532 "e38213de-6c75-42f9-8387-4d6273441bb1" 00:23:01.532 ], 00:23:01.532 "product_name": "Malloc disk", 00:23:01.532 "block_size": 512, 00:23:01.532 "num_blocks": 65536, 00:23:01.532 "uuid": "e38213de-6c75-42f9-8387-4d6273441bb1", 00:23:01.532 "assigned_rate_limits": { 00:23:01.532 "rw_ios_per_sec": 0, 00:23:01.532 "rw_mbytes_per_sec": 0, 00:23:01.532 "r_mbytes_per_sec": 0, 00:23:01.532 "w_mbytes_per_sec": 0 00:23:01.532 }, 00:23:01.532 "claimed": true, 00:23:01.532 "claim_type": "exclusive_write", 00:23:01.532 "zoned": false, 00:23:01.532 "supported_io_types": { 00:23:01.532 "read": true, 00:23:01.532 "write": true, 00:23:01.532 "unmap": true, 00:23:01.532 "write_zeroes": true, 00:23:01.532 "flush": true, 00:23:01.532 "reset": true, 00:23:01.532 "compare": false, 00:23:01.532 "compare_and_write": false, 00:23:01.532 "abort": true, 00:23:01.532 "nvme_admin": false, 00:23:01.532 "nvme_io": false 00:23:01.532 }, 00:23:01.532 "memory_domains": [ 00:23:01.532 { 00:23:01.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.532 "dma_device_type": 2 00:23:01.532 } 00:23:01.532 ], 00:23:01.532 "driver_specific": {} 00:23:01.532 } 00:23:01.532 ] 00:23:01.532 05:03:24 -- common/autotest_common.sh@905 -- # return 0 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:01.532 05:03:24 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.532 05:03:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.791 05:03:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:01.791 "name": "Existed_Raid", 00:23:01.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.791 "strip_size_kb": 64, 00:23:01.791 "state": "configuring", 00:23:01.791 "raid_level": "raid5f", 00:23:01.791 "superblock": false, 00:23:01.791 "num_base_bdevs": 4, 00:23:01.791 "num_base_bdevs_discovered": 1, 00:23:01.791 "num_base_bdevs_operational": 4, 00:23:01.791 "base_bdevs_list": [ 00:23:01.791 { 00:23:01.792 "name": "BaseBdev1", 00:23:01.792 "uuid": "e38213de-6c75-42f9-8387-4d6273441bb1", 00:23:01.792 "is_configured": true, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 65536 00:23:01.792 }, 00:23:01.792 { 00:23:01.792 "name": "BaseBdev2", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 }, 00:23:01.792 { 00:23:01.792 "name": "BaseBdev3", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 }, 00:23:01.792 { 00:23:01.792 "name": "BaseBdev4", 00:23:01.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.792 "is_configured": false, 00:23:01.792 "data_offset": 0, 00:23:01.792 "data_size": 0 00:23:01.792 } 00:23:01.792 ] 00:23:01.792 }' 00:23:01.792 05:03:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:01.792 05:03:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.051 05:03:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:02.310 [2024-11-18 05:03:25.585936] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.310 [2024-11-18 05:03:25.585986] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:23:02.310 05:03:25 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:02.310 05:03:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:02.310 [2024-11-18 05:03:25.770012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.310 [2024-11-18 05:03:25.771872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.310 [2024-11-18 05:03:25.771919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.311 [2024-11-18 05:03:25.771932] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:02.311 [2024-11-18 05:03:25.771945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:02.311 [2024-11-18 05:03:25.771952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:02.311 
[2024-11-18 05:03:25.771965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.311 05:03:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.570 05:03:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.570 "name": "Existed_Raid", 00:23:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.570 "strip_size_kb": 64, 00:23:02.570 "state": "configuring", 00:23:02.570 "raid_level": "raid5f", 00:23:02.570 "superblock": false, 00:23:02.570 "num_base_bdevs": 4, 00:23:02.570 "num_base_bdevs_discovered": 1, 00:23:02.570 "num_base_bdevs_operational": 4, 00:23:02.570 "base_bdevs_list": [ 00:23:02.570 { 00:23:02.570 "name": "BaseBdev1", 00:23:02.570 "uuid": "e38213de-6c75-42f9-8387-4d6273441bb1", 00:23:02.570 "is_configured": true, 00:23:02.570 "data_offset": 0, 00:23:02.570 "data_size": 65536 00:23:02.570 }, 00:23:02.570 { 00:23:02.570 "name": "BaseBdev2", 00:23:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.570 "is_configured": false, 00:23:02.570 "data_offset": 0, 00:23:02.570 "data_size": 0 00:23:02.570 }, 00:23:02.570 { 00:23:02.570 "name": "BaseBdev3", 00:23:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.570 "is_configured": false, 00:23:02.570 "data_offset": 0, 00:23:02.570 "data_size": 0 00:23:02.570 }, 00:23:02.570 { 00:23:02.570 "name": "BaseBdev4", 00:23:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.570 "is_configured": false, 00:23:02.570 "data_offset": 0, 00:23:02.570 "data_size": 0 00:23:02.570 } 00:23:02.570 ] 00:23:02.570 }' 00:23:02.570 05:03:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.570 05:03:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.830 05:03:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:03.103 [2024-11-18 05:03:26.432539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:03.103 BaseBdev2 00:23:03.103 05:03:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:03.103 05:03:26 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:03.103 05:03:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:03.103 05:03:26 -- common/autotest_common.sh@899 -- # local i 00:23:03.103 05:03:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:03.103 05:03:26 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:03.103 05:03:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:03.391 05:03:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:03.391 [ 00:23:03.391 { 00:23:03.391 "name": "BaseBdev2", 00:23:03.391 "aliases": [ 00:23:03.391 "7112db29-912c-446c-8fcb-30832e70d581" 00:23:03.391 ], 00:23:03.391 "product_name": "Malloc disk", 00:23:03.391 "block_size": 512, 00:23:03.391 "num_blocks": 65536, 00:23:03.391 "uuid": "7112db29-912c-446c-8fcb-30832e70d581", 00:23:03.391 "assigned_rate_limits": { 00:23:03.391 "rw_ios_per_sec": 0, 00:23:03.391 "rw_mbytes_per_sec": 0, 00:23:03.391 "r_mbytes_per_sec": 0, 00:23:03.391 "w_mbytes_per_sec": 0 00:23:03.391 }, 00:23:03.391 "claimed": true, 00:23:03.391 "claim_type": "exclusive_write", 00:23:03.391 "zoned": false, 00:23:03.391 "supported_io_types": { 00:23:03.391 "read": true, 00:23:03.391 "write": true, 00:23:03.391 "unmap": true, 00:23:03.391 "write_zeroes": true, 00:23:03.391 "flush": true, 00:23:03.391 "reset": true, 00:23:03.391 "compare": false, 00:23:03.391 "compare_and_write": false, 00:23:03.391 "abort": true, 00:23:03.391 "nvme_admin": false, 00:23:03.391 "nvme_io": false 00:23:03.391 }, 00:23:03.391 "memory_domains": [ 00:23:03.391 { 00:23:03.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.391 "dma_device_type": 2 00:23:03.391 } 00:23:03.391 ], 00:23:03.391 "driver_specific": {} 00:23:03.391 } 00:23:03.391 ] 00:23:03.391 05:03:26 -- common/autotest_common.sh@905 -- # return 0 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.391 05:03:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.666 05:03:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.666 "name": "Existed_Raid", 00:23:03.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.666 "strip_size_kb": 64, 00:23:03.666 "state": "configuring", 00:23:03.666 "raid_level": "raid5f", 00:23:03.666 "superblock": false, 00:23:03.666 "num_base_bdevs": 4, 00:23:03.666 "num_base_bdevs_discovered": 2, 00:23:03.666 "num_base_bdevs_operational": 4, 00:23:03.666 "base_bdevs_list": [ 00:23:03.666 { 00:23:03.666 "name": "BaseBdev1", 00:23:03.666 "uuid": "e38213de-6c75-42f9-8387-4d6273441bb1", 00:23:03.666 "is_configured": true, 00:23:03.666 "data_offset": 0, 00:23:03.666 
"data_size": 65536 00:23:03.666 }, 00:23:03.666 { 00:23:03.666 "name": "BaseBdev2", 00:23:03.666 "uuid": "7112db29-912c-446c-8fcb-30832e70d581", 00:23:03.666 "is_configured": true, 00:23:03.666 "data_offset": 0, 00:23:03.666 "data_size": 65536 00:23:03.666 }, 00:23:03.666 { 00:23:03.666 "name": "BaseBdev3", 00:23:03.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.666 "is_configured": false, 00:23:03.666 "data_offset": 0, 00:23:03.666 "data_size": 0 00:23:03.666 }, 00:23:03.666 { 00:23:03.666 "name": "BaseBdev4", 00:23:03.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.666 "is_configured": false, 00:23:03.666 "data_offset": 0, 00:23:03.666 "data_size": 0 00:23:03.666 } 00:23:03.666 ] 00:23:03.666 }' 00:23:03.666 05:03:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.666 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:23:03.937 05:03:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:04.196 [2024-11-18 05:03:27.589436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:04.196 BaseBdev3 00:23:04.196 05:03:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:04.196 05:03:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:04.196 05:03:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:04.196 05:03:27 -- common/autotest_common.sh@899 -- # local i 00:23:04.196 05:03:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:04.196 05:03:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:04.196 05:03:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.455 05:03:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:04.714 [ 00:23:04.714 { 00:23:04.715 "name": "BaseBdev3", 00:23:04.715 "aliases": [ 00:23:04.715 "67d4d2e1-2d69-4d54-ae61-3533e99a2288" 00:23:04.715 ], 00:23:04.715 "product_name": "Malloc disk", 00:23:04.715 "block_size": 512, 00:23:04.715 "num_blocks": 65536, 00:23:04.715 "uuid": "67d4d2e1-2d69-4d54-ae61-3533e99a2288", 00:23:04.715 "assigned_rate_limits": { 00:23:04.715 "rw_ios_per_sec": 0, 00:23:04.715 "rw_mbytes_per_sec": 0, 00:23:04.715 "r_mbytes_per_sec": 0, 00:23:04.715 "w_mbytes_per_sec": 0 00:23:04.715 }, 00:23:04.715 "claimed": true, 00:23:04.715 "claim_type": "exclusive_write", 00:23:04.715 "zoned": false, 00:23:04.715 "supported_io_types": { 00:23:04.715 "read": true, 00:23:04.715 "write": true, 00:23:04.715 "unmap": true, 00:23:04.715 "write_zeroes": true, 00:23:04.715 "flush": true, 00:23:04.715 "reset": true, 00:23:04.715 "compare": false, 00:23:04.715 "compare_and_write": false, 00:23:04.715 "abort": true, 00:23:04.715 "nvme_admin": false, 00:23:04.715 "nvme_io": false 00:23:04.715 }, 00:23:04.715 "memory_domains": [ 00:23:04.715 { 00:23:04.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.715 "dma_device_type": 2 00:23:04.715 } 00:23:04.715 ], 00:23:04.715 "driver_specific": {} 00:23:04.715 } 00:23:04.715 ] 00:23:04.715 05:03:28 -- common/autotest_common.sh@905 -- # return 0 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:04.715 05:03:28 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.715 05:03:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.974 05:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.974 "name": "Existed_Raid", 00:23:04.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.974 "strip_size_kb": 64, 00:23:04.974 "state": "configuring", 00:23:04.974 "raid_level": "raid5f", 00:23:04.974 "superblock": false, 00:23:04.974 "num_base_bdevs": 4, 00:23:04.974 "num_base_bdevs_discovered": 3, 00:23:04.974 "num_base_bdevs_operational": 4, 00:23:04.974 "base_bdevs_list": [ 00:23:04.974 { 00:23:04.974 "name": "BaseBdev1", 00:23:04.974 "uuid": "e38213de-6c75-42f9-8387-4d6273441bb1", 00:23:04.974 "is_configured": true, 00:23:04.974 "data_offset": 0, 00:23:04.974 "data_size": 65536 00:23:04.974 }, 00:23:04.974 { 00:23:04.974 "name": "BaseBdev2", 00:23:04.974 "uuid": "7112db29-912c-446c-8fcb-30832e70d581", 00:23:04.974 "is_configured": true, 00:23:04.974 "data_offset": 0, 00:23:04.974 "data_size": 65536 00:23:04.974 }, 00:23:04.974 { 00:23:04.974 "name": "BaseBdev3", 00:23:04.974 "uuid": "67d4d2e1-2d69-4d54-ae61-3533e99a2288", 00:23:04.974 "is_configured": true, 00:23:04.974 "data_offset": 0, 00:23:04.974 "data_size": 65536 00:23:04.974 }, 00:23:04.974 { 00:23:04.974 "name": "BaseBdev4", 00:23:04.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.974 "is_configured": false, 00:23:04.974 "data_offset": 0, 00:23:04.974 "data_size": 0 00:23:04.974 } 00:23:04.974 ] 00:23:04.974 }' 00:23:04.974 05:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.974 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:23:05.234 05:03:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:05.234 [2024-11-18 05:03:28.697019] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:05.234 [2024-11-18 05:03:28.697323] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:23:05.234 [2024-11-18 05:03:28.697382] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:05.234 [2024-11-18 05:03:28.697619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:05.234 [2024-11-18 05:03:28.704578] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:23:05.234 [2024-11-18 05:03:28.704752] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:23:05.234 [2024-11-18 05:03:28.705146] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.234 BaseBdev4 00:23:05.234 
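The DEBUG burst just above marks the array finally going online once the fourth member is claimed: the io device registers and blockcnt 196608 is reported. That figure is the expected raid5f geometry rather than an oddity:

  # Capacity check against the "blockcnt 196608, blocklen 512" line above:
  #   4 members x 65536 blocks each, one member's worth reserved for parity:
  #   (4 - 1) * 65536 = 196608 blocks  ->  196608 * 512 B = 96 MiB usable

This non-superblock run keeps data_offset at 0, so each member contributes its full 65536 blocks to that total.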
05:03:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:05.234 05:03:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:05.234 05:03:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:05.234 05:03:28 -- common/autotest_common.sh@899 -- # local i 00:23:05.234 05:03:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:05.234 05:03:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:05.234 05:03:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:05.493 05:03:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:05.752 [ 00:23:05.752 { 00:23:05.752 "name": "BaseBdev4", 00:23:05.752 "aliases": [ 00:23:05.752 "be299f1a-6114-4c9d-9818-9699053c4adc" 00:23:05.752 ], 00:23:05.752 "product_name": "Malloc disk", 00:23:05.752 "block_size": 512, 00:23:05.752 "num_blocks": 65536, 00:23:05.752 "uuid": "be299f1a-6114-4c9d-9818-9699053c4adc", 00:23:05.752 "assigned_rate_limits": { 00:23:05.752 "rw_ios_per_sec": 0, 00:23:05.752 "rw_mbytes_per_sec": 0, 00:23:05.752 "r_mbytes_per_sec": 0, 00:23:05.752 "w_mbytes_per_sec": 0 00:23:05.752 }, 00:23:05.752 "claimed": true, 00:23:05.752 "claim_type": "exclusive_write", 00:23:05.752 "zoned": false, 00:23:05.752 "supported_io_types": { 00:23:05.752 "read": true, 00:23:05.752 "write": true, 00:23:05.752 "unmap": true, 00:23:05.752 "write_zeroes": true, 00:23:05.752 "flush": true, 00:23:05.752 "reset": true, 00:23:05.752 "compare": false, 00:23:05.752 "compare_and_write": false, 00:23:05.753 "abort": true, 00:23:05.753 "nvme_admin": false, 00:23:05.753 "nvme_io": false 00:23:05.753 }, 00:23:05.753 "memory_domains": [ 00:23:05.753 { 00:23:05.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.753 "dma_device_type": 2 00:23:05.753 } 00:23:05.753 ], 00:23:05.753 "driver_specific": {} 00:23:05.753 } 00:23:05.753 ] 00:23:05.753 05:03:29 -- common/autotest_common.sh@905 -- # return 0 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.753 05:03:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.012 05:03:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.012 "name": "Existed_Raid", 00:23:06.012 "uuid": "8d11c7ea-814c-4196-903b-a9178af10929", 00:23:06.012 "strip_size_kb": 64, 00:23:06.012 "state": "online", 00:23:06.012 "raid_level": "raid5f", 
00:23:06.012 "superblock": false, 00:23:06.012 "num_base_bdevs": 4, 00:23:06.012 "num_base_bdevs_discovered": 4, 00:23:06.012 "num_base_bdevs_operational": 4, 00:23:06.012 "base_bdevs_list": [ 00:23:06.012 { 00:23:06.012 "name": "BaseBdev1", 00:23:06.012 "uuid": "e38213de-6c75-42f9-8387-4d6273441bb1", 00:23:06.012 "is_configured": true, 00:23:06.012 "data_offset": 0, 00:23:06.012 "data_size": 65536 00:23:06.012 }, 00:23:06.012 { 00:23:06.012 "name": "BaseBdev2", 00:23:06.012 "uuid": "7112db29-912c-446c-8fcb-30832e70d581", 00:23:06.012 "is_configured": true, 00:23:06.012 "data_offset": 0, 00:23:06.012 "data_size": 65536 00:23:06.012 }, 00:23:06.012 { 00:23:06.012 "name": "BaseBdev3", 00:23:06.012 "uuid": "67d4d2e1-2d69-4d54-ae61-3533e99a2288", 00:23:06.012 "is_configured": true, 00:23:06.012 "data_offset": 0, 00:23:06.012 "data_size": 65536 00:23:06.012 }, 00:23:06.012 { 00:23:06.012 "name": "BaseBdev4", 00:23:06.012 "uuid": "be299f1a-6114-4c9d-9818-9699053c4adc", 00:23:06.012 "is_configured": true, 00:23:06.012 "data_offset": 0, 00:23:06.012 "data_size": 65536 00:23:06.012 } 00:23:06.012 ] 00:23:06.012 }' 00:23:06.012 05:03:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.012 05:03:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.271 05:03:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:06.531 [2024-11-18 05:03:29.884215] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.531 05:03:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.790 05:03:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.790 "name": "Existed_Raid", 00:23:06.790 "uuid": "8d11c7ea-814c-4196-903b-a9178af10929", 00:23:06.790 "strip_size_kb": 64, 00:23:06.790 "state": "online", 00:23:06.790 "raid_level": "raid5f", 00:23:06.790 "superblock": false, 00:23:06.790 "num_base_bdevs": 4, 00:23:06.790 "num_base_bdevs_discovered": 3, 00:23:06.790 "num_base_bdevs_operational": 3, 00:23:06.790 "base_bdevs_list": [ 00:23:06.790 { 00:23:06.790 "name": null, 00:23:06.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.790 "is_configured": false, 00:23:06.790 "data_offset": 0, 00:23:06.790 
"data_size": 65536 00:23:06.790 }, 00:23:06.790 { 00:23:06.790 "name": "BaseBdev2", 00:23:06.790 "uuid": "7112db29-912c-446c-8fcb-30832e70d581", 00:23:06.790 "is_configured": true, 00:23:06.790 "data_offset": 0, 00:23:06.790 "data_size": 65536 00:23:06.790 }, 00:23:06.790 { 00:23:06.790 "name": "BaseBdev3", 00:23:06.790 "uuid": "67d4d2e1-2d69-4d54-ae61-3533e99a2288", 00:23:06.790 "is_configured": true, 00:23:06.790 "data_offset": 0, 00:23:06.790 "data_size": 65536 00:23:06.790 }, 00:23:06.790 { 00:23:06.790 "name": "BaseBdev4", 00:23:06.790 "uuid": "be299f1a-6114-4c9d-9818-9699053c4adc", 00:23:06.790 "is_configured": true, 00:23:06.790 "data_offset": 0, 00:23:06.790 "data_size": 65536 00:23:06.790 } 00:23:06.790 ] 00:23:06.790 }' 00:23:06.790 05:03:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.790 05:03:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.050 05:03:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:07.050 05:03:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:07.050 05:03:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.050 05:03:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:07.309 05:03:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:07.309 05:03:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:07.309 05:03:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:07.309 [2024-11-18 05:03:30.771072] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:07.309 [2024-11-18 05:03:30.771281] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.309 [2024-11-18 05:03:30.771374] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.569 05:03:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:07.569 05:03:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:07.569 05:03:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.569 05:03:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:07.828 05:03:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:07.828 05:03:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:07.829 05:03:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:07.829 [2024-11-18 05:03:31.287339] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:08.088 05:03:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:08.347 [2024-11-18 05:03:31.720597] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:08.347 [2024-11-18 05:03:31.720653] bdev_raid.c: 351:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:23:08.347 05:03:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:08.347 05:03:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:08.347 05:03:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.347 05:03:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:08.607 05:03:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:08.607 05:03:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:08.607 05:03:31 -- bdev/bdev_raid.sh@287 -- # killprocess 84609 00:23:08.607 05:03:31 -- common/autotest_common.sh@936 -- # '[' -z 84609 ']' 00:23:08.607 05:03:31 -- common/autotest_common.sh@940 -- # kill -0 84609 00:23:08.607 05:03:31 -- common/autotest_common.sh@941 -- # uname 00:23:08.607 05:03:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:08.607 05:03:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84609 00:23:08.607 killing process with pid 84609 00:23:08.607 05:03:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:08.607 05:03:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:08.607 05:03:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84609' 00:23:08.607 05:03:32 -- common/autotest_common.sh@955 -- # kill 84609 00:23:08.607 [2024-11-18 05:03:32.024965] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:08.607 05:03:32 -- common/autotest_common.sh@960 -- # wait 84609 00:23:08.607 [2024-11-18 05:03:32.025071] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:09.544 00:23:09.544 real 0m10.730s 00:23:09.544 user 0m17.930s 00:23:09.544 sys 0m1.628s 00:23:09.544 05:03:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:09.544 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:23:09.544 ************************************ 00:23:09.544 END TEST raid5f_state_function_test 00:23:09.544 ************************************ 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:09.544 05:03:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:09.544 05:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:09.544 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:23:09.544 ************************************ 00:23:09.544 START TEST raid5f_state_function_test_sb 00:23:09.544 ************************************ 00:23:09.544 05:03:32 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:09.544 05:03:32 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:09.544 05:03:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:23:09.545 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:09.545 05:03:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:09.545 05:03:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=84992 00:23:09.545 Process raid pid: 84992 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84992' 00:23:09.545 05:03:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84992 /var/tmp/spdk-raid.sock 00:23:09.545 05:03:33 -- common/autotest_common.sh@829 -- # '[' -z 84992 ']' 00:23:09.545 05:03:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:09.545 05:03:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:09.545 05:03:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:09.545 05:03:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.545 05:03:33 -- common/autotest_common.sh@10 -- # set +x 00:23:09.545 [2024-11-18 05:03:33.063021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
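raid5f_state_function_test_sb, starting above, reruns the same scenario with superblock=true, so superblock_create_arg becomes -s and a fresh bdev_svc (pid 84992) is launched. The one change to the create call, taken from the trace that follows:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Same 4-member raid5f, now with -s so a superblock is written on each member.
  $rpc -s "$sock" bdev_raid_create -z 64 -s -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # The superblock reserves the head of every member, which is why the JSON dumps
  # below report data_offset 2048 and data_size 63488 (= 65536 - 2048) where the
  # non-superblock run had 0 and 65536.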
00:23:09.545 [2024-11-18 05:03:33.063255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.804 [2024-11-18 05:03:33.234593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.063 [2024-11-18 05:03:33.385602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.063 [2024-11-18 05:03:33.532954] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:10.632 05:03:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.632 05:03:33 -- common/autotest_common.sh@862 -- # return 0 00:23:10.632 05:03:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:10.891 [2024-11-18 05:03:34.182903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:10.891 [2024-11-18 05:03:34.182955] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:10.891 [2024-11-18 05:03:34.182968] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:10.891 [2024-11-18 05:03:34.182980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:10.891 [2024-11-18 05:03:34.182988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:10.891 [2024-11-18 05:03:34.182998] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:10.891 [2024-11-18 05:03:34.183005] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:10.891 [2024-11-18 05:03:34.183016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.891 05:03:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.150 05:03:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.150 "name": "Existed_Raid", 00:23:11.150 "uuid": "a70dc5f6-88af-4e52-ae11-a1a448040e1e", 00:23:11.150 "strip_size_kb": 64, 00:23:11.150 "state": "configuring", 00:23:11.150 "raid_level": "raid5f", 00:23:11.150 "superblock": true, 00:23:11.150 "num_base_bdevs": 4, 00:23:11.150 "num_base_bdevs_discovered": 0, 00:23:11.150 "num_base_bdevs_operational": 4, 00:23:11.150 "base_bdevs_list": [ 00:23:11.150 { 
00:23:11.150 "name": "BaseBdev1", 00:23:11.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.150 "is_configured": false, 00:23:11.150 "data_offset": 0, 00:23:11.150 "data_size": 0 00:23:11.150 }, 00:23:11.150 { 00:23:11.150 "name": "BaseBdev2", 00:23:11.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.150 "is_configured": false, 00:23:11.150 "data_offset": 0, 00:23:11.150 "data_size": 0 00:23:11.150 }, 00:23:11.150 { 00:23:11.150 "name": "BaseBdev3", 00:23:11.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.150 "is_configured": false, 00:23:11.150 "data_offset": 0, 00:23:11.150 "data_size": 0 00:23:11.150 }, 00:23:11.150 { 00:23:11.150 "name": "BaseBdev4", 00:23:11.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.150 "is_configured": false, 00:23:11.150 "data_offset": 0, 00:23:11.150 "data_size": 0 00:23:11.150 } 00:23:11.150 ] 00:23:11.150 }' 00:23:11.150 05:03:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.150 05:03:34 -- common/autotest_common.sh@10 -- # set +x 00:23:11.410 05:03:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:11.410 [2024-11-18 05:03:34.922996] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:11.410 [2024-11-18 05:03:34.923041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:23:11.669 05:03:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:11.669 [2024-11-18 05:03:35.099074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:11.669 [2024-11-18 05:03:35.099138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:11.669 [2024-11-18 05:03:35.099150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:11.669 [2024-11-18 05:03:35.099162] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:11.669 [2024-11-18 05:03:35.099170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:11.669 [2024-11-18 05:03:35.099181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:11.669 [2024-11-18 05:03:35.099188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:11.669 [2024-11-18 05:03:35.099199] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:11.669 05:03:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:11.928 [2024-11-18 05:03:35.363569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.928 BaseBdev1 00:23:11.928 05:03:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:11.928 05:03:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:11.928 05:03:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:11.928 05:03:35 -- common/autotest_common.sh@899 -- # local i 00:23:11.928 05:03:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:11.928 05:03:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:11.928 05:03:35 -- common/autotest_common.sh@902 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:12.188 05:03:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:12.448 [ 00:23:12.448 { 00:23:12.448 "name": "BaseBdev1", 00:23:12.448 "aliases": [ 00:23:12.448 "0b1e5b61-ab0e-400b-b95a-e6f6074ba81e" 00:23:12.448 ], 00:23:12.448 "product_name": "Malloc disk", 00:23:12.448 "block_size": 512, 00:23:12.448 "num_blocks": 65536, 00:23:12.448 "uuid": "0b1e5b61-ab0e-400b-b95a-e6f6074ba81e", 00:23:12.448 "assigned_rate_limits": { 00:23:12.448 "rw_ios_per_sec": 0, 00:23:12.448 "rw_mbytes_per_sec": 0, 00:23:12.448 "r_mbytes_per_sec": 0, 00:23:12.448 "w_mbytes_per_sec": 0 00:23:12.448 }, 00:23:12.448 "claimed": true, 00:23:12.448 "claim_type": "exclusive_write", 00:23:12.448 "zoned": false, 00:23:12.448 "supported_io_types": { 00:23:12.448 "read": true, 00:23:12.448 "write": true, 00:23:12.448 "unmap": true, 00:23:12.448 "write_zeroes": true, 00:23:12.448 "flush": true, 00:23:12.448 "reset": true, 00:23:12.448 "compare": false, 00:23:12.448 "compare_and_write": false, 00:23:12.448 "abort": true, 00:23:12.448 "nvme_admin": false, 00:23:12.448 "nvme_io": false 00:23:12.448 }, 00:23:12.448 "memory_domains": [ 00:23:12.448 { 00:23:12.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.448 "dma_device_type": 2 00:23:12.448 } 00:23:12.448 ], 00:23:12.448 "driver_specific": {} 00:23:12.448 } 00:23:12.448 ] 00:23:12.448 05:03:35 -- common/autotest_common.sh@905 -- # return 0 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:12.448 "name": "Existed_Raid", 00:23:12.448 "uuid": "a4302a65-bc0e-4300-ac00-07a58f99da5b", 00:23:12.448 "strip_size_kb": 64, 00:23:12.448 "state": "configuring", 00:23:12.448 "raid_level": "raid5f", 00:23:12.448 "superblock": true, 00:23:12.448 "num_base_bdevs": 4, 00:23:12.448 "num_base_bdevs_discovered": 1, 00:23:12.448 "num_base_bdevs_operational": 4, 00:23:12.448 "base_bdevs_list": [ 00:23:12.448 { 00:23:12.448 "name": "BaseBdev1", 00:23:12.448 "uuid": "0b1e5b61-ab0e-400b-b95a-e6f6074ba81e", 00:23:12.448 "is_configured": true, 00:23:12.448 "data_offset": 2048, 00:23:12.448 "data_size": 63488 00:23:12.448 }, 00:23:12.448 { 00:23:12.448 "name": "BaseBdev2", 00:23:12.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.448 "is_configured": false, 00:23:12.448 "data_offset": 0, 00:23:12.448 "data_size": 0 
00:23:12.448 }, 00:23:12.448 { 00:23:12.448 "name": "BaseBdev3", 00:23:12.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.448 "is_configured": false, 00:23:12.448 "data_offset": 0, 00:23:12.448 "data_size": 0 00:23:12.448 }, 00:23:12.448 { 00:23:12.448 "name": "BaseBdev4", 00:23:12.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.448 "is_configured": false, 00:23:12.448 "data_offset": 0, 00:23:12.448 "data_size": 0 00:23:12.448 } 00:23:12.448 ] 00:23:12.448 }' 00:23:12.448 05:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:12.448 05:03:35 -- common/autotest_common.sh@10 -- # set +x 00:23:12.707 05:03:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:12.966 [2024-11-18 05:03:36.379836] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:12.966 [2024-11-18 05:03:36.379887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:23:12.966 05:03:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:12.966 05:03:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:13.226 05:03:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:13.485 BaseBdev1 00:23:13.485 05:03:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:13.485 05:03:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:13.485 05:03:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:13.485 05:03:36 -- common/autotest_common.sh@899 -- # local i 00:23:13.485 05:03:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:13.485 05:03:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:13.485 05:03:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:13.745 05:03:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:13.745 [ 00:23:13.745 { 00:23:13.745 "name": "BaseBdev1", 00:23:13.745 "aliases": [ 00:23:13.745 "2fc97105-e9f9-44dd-a24c-ba8635c7925d" 00:23:13.745 ], 00:23:13.745 "product_name": "Malloc disk", 00:23:13.745 "block_size": 512, 00:23:13.745 "num_blocks": 65536, 00:23:13.745 "uuid": "2fc97105-e9f9-44dd-a24c-ba8635c7925d", 00:23:13.745 "assigned_rate_limits": { 00:23:13.745 "rw_ios_per_sec": 0, 00:23:13.745 "rw_mbytes_per_sec": 0, 00:23:13.745 "r_mbytes_per_sec": 0, 00:23:13.745 "w_mbytes_per_sec": 0 00:23:13.745 }, 00:23:13.745 "claimed": false, 00:23:13.745 "zoned": false, 00:23:13.745 "supported_io_types": { 00:23:13.745 "read": true, 00:23:13.745 "write": true, 00:23:13.745 "unmap": true, 00:23:13.745 "write_zeroes": true, 00:23:13.745 "flush": true, 00:23:13.745 "reset": true, 00:23:13.745 "compare": false, 00:23:13.745 "compare_and_write": false, 00:23:13.745 "abort": true, 00:23:13.745 "nvme_admin": false, 00:23:13.745 "nvme_io": false 00:23:13.745 }, 00:23:13.745 "memory_domains": [ 00:23:13.745 { 00:23:13.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.745 "dma_device_type": 2 00:23:13.745 } 00:23:13.745 ], 00:23:13.745 "driver_specific": {} 00:23:13.745 } 00:23:13.745 ] 00:23:13.745 05:03:37 -- common/autotest_common.sh@905 -- # return 0 00:23:13.745 05:03:37 -- 
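Aside: bdev_raid_create, issued next in the trace, carries the whole array geometry on the command line. A sketch of the invocation with the flags spelled out, assuming the $rpc shorthand from the aside above; the flag meanings are read off this log (strip_size_create_arg='-z 64', superblock: true when -s is passed) rather than the rpc.py help text:

# -z 64: strip size in KB; -s: write a superblock to each base bdev;
# -r raid5f: RAID level; -b: quoted, space-separated base bdev names; -n: array name
$rpc bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid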
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:14.004 [2024-11-18 05:03:37.373666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.004 [2024-11-18 05:03:37.375468] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.004 [2024-11-18 05:03:37.375545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.004 [2024-11-18 05:03:37.375573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:14.004 [2024-11-18 05:03:37.375588] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:14.004 [2024-11-18 05:03:37.375595] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:14.004 [2024-11-18 05:03:37.375608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.004 05:03:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.263 05:03:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.263 "name": "Existed_Raid", 00:23:14.263 "uuid": "6f465ce1-ea24-4202-9fa3-b201dbe1708e", 00:23:14.263 "strip_size_kb": 64, 00:23:14.263 "state": "configuring", 00:23:14.263 "raid_level": "raid5f", 00:23:14.263 "superblock": true, 00:23:14.263 "num_base_bdevs": 4, 00:23:14.263 "num_base_bdevs_discovered": 1, 00:23:14.263 "num_base_bdevs_operational": 4, 00:23:14.263 "base_bdevs_list": [ 00:23:14.263 { 00:23:14.263 "name": "BaseBdev1", 00:23:14.263 "uuid": "2fc97105-e9f9-44dd-a24c-ba8635c7925d", 00:23:14.263 "is_configured": true, 00:23:14.263 "data_offset": 2048, 00:23:14.263 "data_size": 63488 00:23:14.263 }, 00:23:14.263 { 00:23:14.263 "name": "BaseBdev2", 00:23:14.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.263 "is_configured": false, 00:23:14.263 "data_offset": 0, 00:23:14.263 "data_size": 0 00:23:14.263 }, 00:23:14.263 { 00:23:14.263 "name": "BaseBdev3", 00:23:14.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.263 "is_configured": false, 00:23:14.263 "data_offset": 0, 00:23:14.263 "data_size": 0 00:23:14.263 }, 00:23:14.263 { 00:23:14.263 "name": "BaseBdev4", 00:23:14.263 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:14.263 "is_configured": false, 00:23:14.263 "data_offset": 0, 00:23:14.263 "data_size": 0 00:23:14.263 } 00:23:14.263 ] 00:23:14.263 }' 00:23:14.263 05:03:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.263 05:03:37 -- common/autotest_common.sh@10 -- # set +x 00:23:14.523 05:03:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:14.783 [2024-11-18 05:03:38.108429] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:14.783 BaseBdev2 00:23:14.783 05:03:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:14.783 05:03:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:14.783 05:03:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:14.783 05:03:38 -- common/autotest_common.sh@899 -- # local i 00:23:14.783 05:03:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:14.783 05:03:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:14.783 05:03:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.043 05:03:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:15.043 [ 00:23:15.043 { 00:23:15.043 "name": "BaseBdev2", 00:23:15.043 "aliases": [ 00:23:15.043 "d759ccb7-fb23-4f79-820d-07f50148df38" 00:23:15.043 ], 00:23:15.043 "product_name": "Malloc disk", 00:23:15.043 "block_size": 512, 00:23:15.043 "num_blocks": 65536, 00:23:15.043 "uuid": "d759ccb7-fb23-4f79-820d-07f50148df38", 00:23:15.043 "assigned_rate_limits": { 00:23:15.043 "rw_ios_per_sec": 0, 00:23:15.043 "rw_mbytes_per_sec": 0, 00:23:15.043 "r_mbytes_per_sec": 0, 00:23:15.043 "w_mbytes_per_sec": 0 00:23:15.043 }, 00:23:15.043 "claimed": true, 00:23:15.043 "claim_type": "exclusive_write", 00:23:15.043 "zoned": false, 00:23:15.043 "supported_io_types": { 00:23:15.043 "read": true, 00:23:15.043 "write": true, 00:23:15.043 "unmap": true, 00:23:15.043 "write_zeroes": true, 00:23:15.043 "flush": true, 00:23:15.043 "reset": true, 00:23:15.043 "compare": false, 00:23:15.043 "compare_and_write": false, 00:23:15.043 "abort": true, 00:23:15.043 "nvme_admin": false, 00:23:15.043 "nvme_io": false 00:23:15.043 }, 00:23:15.043 "memory_domains": [ 00:23:15.043 { 00:23:15.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.043 "dma_device_type": 2 00:23:15.043 } 00:23:15.043 ], 00:23:15.043 "driver_specific": {} 00:23:15.043 } 00:23:15.043 ] 00:23:15.043 05:03:38 -- common/autotest_common.sh@905 -- # return 0 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.043 05:03:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.303 05:03:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.303 "name": "Existed_Raid", 00:23:15.303 "uuid": "6f465ce1-ea24-4202-9fa3-b201dbe1708e", 00:23:15.303 "strip_size_kb": 64, 00:23:15.303 "state": "configuring", 00:23:15.303 "raid_level": "raid5f", 00:23:15.303 "superblock": true, 00:23:15.303 "num_base_bdevs": 4, 00:23:15.303 "num_base_bdevs_discovered": 2, 00:23:15.303 "num_base_bdevs_operational": 4, 00:23:15.303 "base_bdevs_list": [ 00:23:15.303 { 00:23:15.303 "name": "BaseBdev1", 00:23:15.303 "uuid": "2fc97105-e9f9-44dd-a24c-ba8635c7925d", 00:23:15.303 "is_configured": true, 00:23:15.303 "data_offset": 2048, 00:23:15.303 "data_size": 63488 00:23:15.303 }, 00:23:15.303 { 00:23:15.303 "name": "BaseBdev2", 00:23:15.303 "uuid": "d759ccb7-fb23-4f79-820d-07f50148df38", 00:23:15.303 "is_configured": true, 00:23:15.303 "data_offset": 2048, 00:23:15.303 "data_size": 63488 00:23:15.303 }, 00:23:15.303 { 00:23:15.303 "name": "BaseBdev3", 00:23:15.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.303 "is_configured": false, 00:23:15.303 "data_offset": 0, 00:23:15.303 "data_size": 0 00:23:15.303 }, 00:23:15.303 { 00:23:15.303 "name": "BaseBdev4", 00:23:15.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.303 "is_configured": false, 00:23:15.303 "data_offset": 0, 00:23:15.303 "data_size": 0 00:23:15.303 } 00:23:15.303 ] 00:23:15.303 }' 00:23:15.304 05:03:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.304 05:03:38 -- common/autotest_common.sh@10 -- # set +x 00:23:15.563 05:03:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:15.822 [2024-11-18 05:03:39.239682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:15.822 BaseBdev3 00:23:15.822 05:03:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:15.823 05:03:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:15.823 05:03:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:15.823 05:03:39 -- common/autotest_common.sh@899 -- # local i 00:23:15.823 05:03:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:15.823 05:03:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:15.823 05:03:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:16.082 05:03:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:16.341 [ 00:23:16.341 { 00:23:16.341 "name": "BaseBdev3", 00:23:16.341 "aliases": [ 00:23:16.341 "7ff2eb95-6633-4e77-b19e-28e660b39868" 00:23:16.341 ], 00:23:16.341 "product_name": "Malloc disk", 00:23:16.341 "block_size": 512, 00:23:16.341 "num_blocks": 65536, 00:23:16.341 "uuid": "7ff2eb95-6633-4e77-b19e-28e660b39868", 00:23:16.341 "assigned_rate_limits": { 00:23:16.341 "rw_ios_per_sec": 0, 00:23:16.341 "rw_mbytes_per_sec": 0, 00:23:16.341 "r_mbytes_per_sec": 0, 00:23:16.341 "w_mbytes_per_sec": 0 00:23:16.341 }, 00:23:16.341 "claimed": true, 00:23:16.341 "claim_type": "exclusive_write", 
00:23:16.341 "zoned": false, 00:23:16.341 "supported_io_types": { 00:23:16.341 "read": true, 00:23:16.341 "write": true, 00:23:16.341 "unmap": true, 00:23:16.341 "write_zeroes": true, 00:23:16.341 "flush": true, 00:23:16.341 "reset": true, 00:23:16.341 "compare": false, 00:23:16.341 "compare_and_write": false, 00:23:16.341 "abort": true, 00:23:16.341 "nvme_admin": false, 00:23:16.341 "nvme_io": false 00:23:16.341 }, 00:23:16.341 "memory_domains": [ 00:23:16.341 { 00:23:16.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.341 "dma_device_type": 2 00:23:16.341 } 00:23:16.341 ], 00:23:16.341 "driver_specific": {} 00:23:16.341 } 00:23:16.341 ] 00:23:16.341 05:03:39 -- common/autotest_common.sh@905 -- # return 0 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.341 "name": "Existed_Raid", 00:23:16.341 "uuid": "6f465ce1-ea24-4202-9fa3-b201dbe1708e", 00:23:16.341 "strip_size_kb": 64, 00:23:16.341 "state": "configuring", 00:23:16.341 "raid_level": "raid5f", 00:23:16.341 "superblock": true, 00:23:16.341 "num_base_bdevs": 4, 00:23:16.341 "num_base_bdevs_discovered": 3, 00:23:16.341 "num_base_bdevs_operational": 4, 00:23:16.341 "base_bdevs_list": [ 00:23:16.341 { 00:23:16.341 "name": "BaseBdev1", 00:23:16.341 "uuid": "2fc97105-e9f9-44dd-a24c-ba8635c7925d", 00:23:16.341 "is_configured": true, 00:23:16.341 "data_offset": 2048, 00:23:16.341 "data_size": 63488 00:23:16.341 }, 00:23:16.341 { 00:23:16.341 "name": "BaseBdev2", 00:23:16.341 "uuid": "d759ccb7-fb23-4f79-820d-07f50148df38", 00:23:16.341 "is_configured": true, 00:23:16.341 "data_offset": 2048, 00:23:16.341 "data_size": 63488 00:23:16.341 }, 00:23:16.341 { 00:23:16.341 "name": "BaseBdev3", 00:23:16.341 "uuid": "7ff2eb95-6633-4e77-b19e-28e660b39868", 00:23:16.341 "is_configured": true, 00:23:16.341 "data_offset": 2048, 00:23:16.341 "data_size": 63488 00:23:16.341 }, 00:23:16.341 { 00:23:16.341 "name": "BaseBdev4", 00:23:16.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.341 "is_configured": false, 00:23:16.341 "data_offset": 0, 00:23:16.341 "data_size": 0 00:23:16.341 } 00:23:16.341 ] 00:23:16.341 }' 00:23:16.341 05:03:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.341 05:03:39 -- common/autotest_common.sh@10 -- # set +x 00:23:16.600 05:03:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:16.860 [2024-11-18 05:03:40.379162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:16.860 [2024-11-18 05:03:40.379439] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:23:16.860 [2024-11-18 05:03:40.379457] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:16.860 [2024-11-18 05:03:40.379641] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:23:16.860 BaseBdev4 00:23:17.120 [2024-11-18 05:03:40.386472] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:23:17.120 [2024-11-18 05:03:40.386517] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:23:17.120 [2024-11-18 05:03:40.386721] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.120 05:03:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:17.120 05:03:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:17.120 05:03:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:17.120 05:03:40 -- common/autotest_common.sh@899 -- # local i 00:23:17.120 05:03:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:17.120 05:03:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:17.120 05:03:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.120 05:03:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:17.379 [ 00:23:17.380 { 00:23:17.380 "name": "BaseBdev4", 00:23:17.380 "aliases": [ 00:23:17.380 "81623f42-5930-4172-b613-0d56723cbd6a" 00:23:17.380 ], 00:23:17.380 "product_name": "Malloc disk", 00:23:17.380 "block_size": 512, 00:23:17.380 "num_blocks": 65536, 00:23:17.380 "uuid": "81623f42-5930-4172-b613-0d56723cbd6a", 00:23:17.380 "assigned_rate_limits": { 00:23:17.380 "rw_ios_per_sec": 0, 00:23:17.380 "rw_mbytes_per_sec": 0, 00:23:17.380 "r_mbytes_per_sec": 0, 00:23:17.380 "w_mbytes_per_sec": 0 00:23:17.380 }, 00:23:17.380 "claimed": true, 00:23:17.380 "claim_type": "exclusive_write", 00:23:17.380 "zoned": false, 00:23:17.380 "supported_io_types": { 00:23:17.380 "read": true, 00:23:17.380 "write": true, 00:23:17.380 "unmap": true, 00:23:17.380 "write_zeroes": true, 00:23:17.380 "flush": true, 00:23:17.380 "reset": true, 00:23:17.380 "compare": false, 00:23:17.380 "compare_and_write": false, 00:23:17.380 "abort": true, 00:23:17.380 "nvme_admin": false, 00:23:17.380 "nvme_io": false 00:23:17.380 }, 00:23:17.380 "memory_domains": [ 00:23:17.380 { 00:23:17.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.380 "dma_device_type": 2 00:23:17.380 } 00:23:17.380 ], 00:23:17.380 "driver_specific": {} 00:23:17.380 } 00:23:17.380 ] 00:23:17.380 05:03:40 -- common/autotest_common.sh@905 -- # return 0 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.380 05:03:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.640 05:03:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.640 "name": "Existed_Raid", 00:23:17.640 "uuid": "6f465ce1-ea24-4202-9fa3-b201dbe1708e", 00:23:17.640 "strip_size_kb": 64, 00:23:17.640 "state": "online", 00:23:17.640 "raid_level": "raid5f", 00:23:17.640 "superblock": true, 00:23:17.640 "num_base_bdevs": 4, 00:23:17.640 "num_base_bdevs_discovered": 4, 00:23:17.640 "num_base_bdevs_operational": 4, 00:23:17.640 "base_bdevs_list": [ 00:23:17.640 { 00:23:17.640 "name": "BaseBdev1", 00:23:17.640 "uuid": "2fc97105-e9f9-44dd-a24c-ba8635c7925d", 00:23:17.640 "is_configured": true, 00:23:17.640 "data_offset": 2048, 00:23:17.640 "data_size": 63488 00:23:17.640 }, 00:23:17.640 { 00:23:17.640 "name": "BaseBdev2", 00:23:17.640 "uuid": "d759ccb7-fb23-4f79-820d-07f50148df38", 00:23:17.640 "is_configured": true, 00:23:17.640 "data_offset": 2048, 00:23:17.640 "data_size": 63488 00:23:17.640 }, 00:23:17.640 { 00:23:17.640 "name": "BaseBdev3", 00:23:17.640 "uuid": "7ff2eb95-6633-4e77-b19e-28e660b39868", 00:23:17.640 "is_configured": true, 00:23:17.640 "data_offset": 2048, 00:23:17.640 "data_size": 63488 00:23:17.640 }, 00:23:17.640 { 00:23:17.640 "name": "BaseBdev4", 00:23:17.640 "uuid": "81623f42-5930-4172-b613-0d56723cbd6a", 00:23:17.640 "is_configured": true, 00:23:17.640 "data_offset": 2048, 00:23:17.640 "data_size": 63488 00:23:17.640 } 00:23:17.640 ] 00:23:17.640 }' 00:23:17.640 05:03:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.640 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:23:17.899 05:03:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:18.157 [2024-11-18 05:03:41.433601] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.157 05:03:41 -- 
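Aside: this is the degraded-mode probe — BaseBdev1 is deleted out from under the online array, and because raid5f is a redundant level (has_redundancy returns 0 for it in the trace above), the expected state stays "online" with three of the four members left. The same probe in isolation, assuming $rpc as above:

$rpc bdev_malloc_delete BaseBdev1
$rpc bdev_raid_get_bdevs all | jq -r \
    '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
# expected for raid5f after losing one member: online 3/3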
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.157 05:03:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.416 05:03:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.416 "name": "Existed_Raid", 00:23:18.416 "uuid": "6f465ce1-ea24-4202-9fa3-b201dbe1708e", 00:23:18.416 "strip_size_kb": 64, 00:23:18.416 "state": "online", 00:23:18.416 "raid_level": "raid5f", 00:23:18.416 "superblock": true, 00:23:18.416 "num_base_bdevs": 4, 00:23:18.416 "num_base_bdevs_discovered": 3, 00:23:18.416 "num_base_bdevs_operational": 3, 00:23:18.416 "base_bdevs_list": [ 00:23:18.416 { 00:23:18.416 "name": null, 00:23:18.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.416 "is_configured": false, 00:23:18.416 "data_offset": 2048, 00:23:18.416 "data_size": 63488 00:23:18.416 }, 00:23:18.416 { 00:23:18.416 "name": "BaseBdev2", 00:23:18.416 "uuid": "d759ccb7-fb23-4f79-820d-07f50148df38", 00:23:18.416 "is_configured": true, 00:23:18.416 "data_offset": 2048, 00:23:18.416 "data_size": 63488 00:23:18.416 }, 00:23:18.416 { 00:23:18.416 "name": "BaseBdev3", 00:23:18.416 "uuid": "7ff2eb95-6633-4e77-b19e-28e660b39868", 00:23:18.416 "is_configured": true, 00:23:18.416 "data_offset": 2048, 00:23:18.416 "data_size": 63488 00:23:18.416 }, 00:23:18.416 { 00:23:18.416 "name": "BaseBdev4", 00:23:18.416 "uuid": "81623f42-5930-4172-b613-0d56723cbd6a", 00:23:18.416 "is_configured": true, 00:23:18.416 "data_offset": 2048, 00:23:18.416 "data_size": 63488 00:23:18.416 } 00:23:18.416 ] 00:23:18.416 }' 00:23:18.416 05:03:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.416 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:23:18.675 05:03:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:18.675 05:03:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:18.675 05:03:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:18.675 05:03:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.934 05:03:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:18.934 05:03:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:18.934 05:03:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:18.934 [2024-11-18 05:03:42.391422] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:18.934 [2024-11-18 05:03:42.391458] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.934 [2024-11-18 05:03:42.391518] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:19.193 05:03:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
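Aside: the loop traced through here repeats that probe for each remaining member — confirm the array object still answers to its name, then delete the next base bdev; only when the last one goes does the raid bdev disappear. In outline, assuming $rpc as above:

for n in 2 3 4; do
    # the array must still be present before each removal
    name=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
    [ "$name" = Existed_Raid ] || exit 1
    $rpc bdev_malloc_delete "BaseBdev$n"
done
# after the last member is gone, select(.) filters out the null name
$rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'   # prints nothing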
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:19.452 [2024-11-18 05:03:42.797519] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:19.452 05:03:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:19.452 05:03:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:19.452 05:03:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.452 05:03:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:19.711 05:03:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:19.711 05:03:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:19.711 05:03:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:19.711 [2024-11-18 05:03:43.215058] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:19.711 [2024-11-18 05:03:43.215133] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:23:19.971 05:03:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:19.971 05:03:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:19.971 05:03:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.971 05:03:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:20.230 05:03:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:20.230 05:03:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:20.230 05:03:43 -- bdev/bdev_raid.sh@287 -- # killprocess 84992 00:23:20.230 05:03:43 -- common/autotest_common.sh@936 -- # '[' -z 84992 ']' 00:23:20.230 05:03:43 -- common/autotest_common.sh@940 -- # kill -0 84992 00:23:20.230 05:03:43 -- common/autotest_common.sh@941 -- # uname 00:23:20.230 05:03:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.230 05:03:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84992 00:23:20.230 killing process with pid 84992 00:23:20.230 05:03:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:20.230 05:03:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.230 05:03:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84992' 00:23:20.230 05:03:43 -- common/autotest_common.sh@955 -- # kill 84992 00:23:20.230 [2024-11-18 05:03:43.570025] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.230 05:03:43 -- common/autotest_common.sh@960 -- # wait 84992 00:23:20.230 [2024-11-18 05:03:43.570128] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:21.166 ************************************ 00:23:21.166 END TEST raid5f_state_function_test_sb 00:23:21.166 ************************************ 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:21.166 00:23:21.166 real 0m11.492s 00:23:21.166 user 0m19.296s 00:23:21.166 sys 0m1.695s 00:23:21.166 05:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:21.166 05:03:44 -- common/autotest_common.sh@10 -- # set +x 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:21.166 05:03:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:21.166 05:03:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.166 05:03:44 -- common/autotest_common.sh@10 -- # set +x 00:23:21.166 ************************************ 
00:23:21.166 START TEST raid5f_superblock_test 00:23:21.166 ************************************ 00:23:21.166 05:03:44 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:21.166 05:03:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=85382 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:21.167 05:03:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 85382 /var/tmp/spdk-raid.sock 00:23:21.167 05:03:44 -- common/autotest_common.sh@829 -- # '[' -z 85382 ']' 00:23:21.167 05:03:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:21.167 05:03:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.167 05:03:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:21.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:21.167 05:03:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.167 05:03:44 -- common/autotest_common.sh@10 -- # set +x 00:23:21.167 [2024-11-18 05:03:44.604745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
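Aside: the superblock test drives a dedicated bdev_svc app rather than a full SPDK target. A minimal sketch of the harness startup traced here — the backgrounding and pid capture are implied by the waitforlisten call rather than shown verbatim in the trace, so treat those two lines as an assumption:

# bare bdev service on the private RAID socket, with bdev_raid debug logging
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# the test's waitforlisten helper then polls until the socket accepts RPCs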
00:23:21.167 [2024-11-18 05:03:44.604907] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85382 ] 00:23:21.426 [2024-11-18 05:03:44.779226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.685 [2024-11-18 05:03:44.996481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.685 [2024-11-18 05:03:45.143297] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:21.943 05:03:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.943 05:03:45 -- common/autotest_common.sh@862 -- # return 0 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:21.943 05:03:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:21.944 05:03:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:21.944 05:03:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:22.203 malloc1 00:23:22.203 05:03:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:22.462 [2024-11-18 05:03:45.889055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:22.462 [2024-11-18 05:03:45.889138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.462 [2024-11-18 05:03:45.889174] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:23:22.462 [2024-11-18 05:03:45.889187] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.462 [2024-11-18 05:03:45.891395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.462 [2024-11-18 05:03:45.891448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:22.462 pt1 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:22.462 05:03:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:22.722 malloc2 00:23:22.722 05:03:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:22.981 [2024-11-18 05:03:46.275363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:22.981 [2024-11-18 05:03:46.275453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.981 [2024-11-18 05:03:46.275482] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:23:22.981 [2024-11-18 05:03:46.275495] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.981 [2024-11-18 05:03:46.277631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.981 [2024-11-18 05:03:46.277683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:22.981 pt2 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:22.981 malloc3 00:23:22.981 05:03:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:23.241 [2024-11-18 05:03:46.643463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:23.241 [2024-11-18 05:03:46.643551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.241 [2024-11-18 05:03:46.643580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:23:23.241 [2024-11-18 05:03:46.643609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.241 [2024-11-18 05:03:46.645685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.241 [2024-11-18 05:03:46.645722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:23.241 pt3 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:23.241 05:03:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:23.501 malloc4 00:23:23.501 05:03:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
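Aside: each ptN device traced here is a passthru bdev stacked on a mallocN bdev under a fixed, predictable UUID, so the raid superblock written through it can be matched up again later. The pattern for one member, assuming $rpc as above (pt2 through pt4 follow the same shape):

$rpc bdev_malloc_create 32 512 -b malloc1
# expose malloc1 as pt1 under a deterministic UUID
$rpc bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001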
00000000-0000-0000-0000-000000000004 00:23:23.760 [2024-11-18 05:03:47.033449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:23.760 [2024-11-18 05:03:47.033526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.760 [2024-11-18 05:03:47.033560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:23:23.760 [2024-11-18 05:03:47.033573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.760 [2024-11-18 05:03:47.035951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.760 [2024-11-18 05:03:47.036005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:23.760 pt4 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:23.760 [2024-11-18 05:03:47.209529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:23.760 [2024-11-18 05:03:47.211292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:23.760 [2024-11-18 05:03:47.211387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:23.760 [2024-11-18 05:03:47.211447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:23.760 [2024-11-18 05:03:47.211683] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:23:23.760 [2024-11-18 05:03:47.211699] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:23.760 [2024-11-18 05:03:47.211801] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:23.760 [2024-11-18 05:03:47.217364] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:23:23.760 [2024-11-18 05:03:47.217396] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:23:23.760 [2024-11-18 05:03:47.217627] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.760 05:03:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.761 05:03:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.020 05:03:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:24.020 "name": "raid_bdev1", 00:23:24.020 "uuid": 
"9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:24.020 "strip_size_kb": 64, 00:23:24.020 "state": "online", 00:23:24.020 "raid_level": "raid5f", 00:23:24.020 "superblock": true, 00:23:24.020 "num_base_bdevs": 4, 00:23:24.020 "num_base_bdevs_discovered": 4, 00:23:24.020 "num_base_bdevs_operational": 4, 00:23:24.020 "base_bdevs_list": [ 00:23:24.020 { 00:23:24.020 "name": "pt1", 00:23:24.020 "uuid": "746f6aa4-de71-5376-9239-9ac12f31e3ec", 00:23:24.020 "is_configured": true, 00:23:24.020 "data_offset": 2048, 00:23:24.020 "data_size": 63488 00:23:24.020 }, 00:23:24.020 { 00:23:24.020 "name": "pt2", 00:23:24.020 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:24.020 "is_configured": true, 00:23:24.020 "data_offset": 2048, 00:23:24.020 "data_size": 63488 00:23:24.020 }, 00:23:24.020 { 00:23:24.020 "name": "pt3", 00:23:24.020 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:24.020 "is_configured": true, 00:23:24.020 "data_offset": 2048, 00:23:24.020 "data_size": 63488 00:23:24.020 }, 00:23:24.020 { 00:23:24.020 "name": "pt4", 00:23:24.020 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:24.020 "is_configured": true, 00:23:24.020 "data_offset": 2048, 00:23:24.020 "data_size": 63488 00:23:24.020 } 00:23:24.020 ] 00:23:24.020 }' 00:23:24.020 05:03:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:24.020 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:24.279 05:03:47 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:24.279 05:03:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:24.539 [2024-11-18 05:03:47.895552] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.539 05:03:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9f204e8b-bd2c-492f-84d8-cf3be67b27cd 00:23:24.539 05:03:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 9f204e8b-bd2c-492f-84d8-cf3be67b27cd ']' 00:23:24.539 05:03:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:24.798 [2024-11-18 05:03:48.159463] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.798 [2024-11-18 05:03:48.159513] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.798 [2024-11-18 05:03:48.159589] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.798 [2024-11-18 05:03:48.159679] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.798 [2024-11-18 05:03:48.159692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:23:24.798 05:03:48 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.798 05:03:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:25.057 05:03:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:25.057 05:03:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:25.057 05:03:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:25.057 05:03:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:25.057 05:03:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:25.057 05:03:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:23:25.316 05:03:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:25.316 05:03:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:25.575 05:03:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:25.575 05:03:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:25.575 05:03:49 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:25.575 05:03:49 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:25.853 05:03:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:25.853 05:03:49 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:25.853 05:03:49 -- common/autotest_common.sh@650 -- # local es=0 00:23:25.853 05:03:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:25.853 05:03:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:25.853 05:03:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.853 05:03:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:25.853 05:03:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.853 05:03:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:25.854 05:03:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.854 05:03:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:25.854 05:03:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:25.854 05:03:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:26.119 [2024-11-18 05:03:49.539748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:26.119 [2024-11-18 05:03:49.541602] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:26.119 [2024-11-18 05:03:49.541684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:26.119 [2024-11-18 05:03:49.541722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:26.119 [2024-11-18 05:03:49.541793] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:26.119 [2024-11-18 05:03:49.541900] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:26.119 [2024-11-18 05:03:49.541933] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:26.119 [2024-11-18 05:03:49.541957] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:23:26.119 [2024-11-18 05:03:49.541978] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.119 [2024-11-18 05:03:49.541990] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:23:26.119 request: 00:23:26.119 { 00:23:26.119 "name": "raid_bdev1", 00:23:26.119 "raid_level": "raid5f", 00:23:26.119 "base_bdevs": [ 00:23:26.119 "malloc1", 00:23:26.119 "malloc2", 00:23:26.119 "malloc3", 00:23:26.119 "malloc4" 00:23:26.119 ], 00:23:26.119 "superblock": false, 00:23:26.119 "strip_size_kb": 64, 00:23:26.119 "method": "bdev_raid_create", 00:23:26.119 "req_id": 1 00:23:26.119 } 00:23:26.119 Got JSON-RPC error response 00:23:26.119 response: 00:23:26.119 { 00:23:26.119 "code": -17, 00:23:26.119 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:26.119 } 00:23:26.119 05:03:49 -- common/autotest_common.sh@653 -- # es=1 00:23:26.119 05:03:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.119 05:03:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.119 05:03:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.119 05:03:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.119 05:03:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:26.379 05:03:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:26.379 05:03:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:26.379 05:03:49 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:26.638 [2024-11-18 05:03:49.927793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:26.638 [2024-11-18 05:03:49.927872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.638 [2024-11-18 05:03:49.927901] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:23:26.638 [2024-11-18 05:03:49.927914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.638 [2024-11-18 05:03:49.930386] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.638 [2024-11-18 05:03:49.930439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:26.638 [2024-11-18 05:03:49.930533] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:26.638 [2024-11-18 05:03:49.930591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:26.638 pt1 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
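Aside: the failure traced above is the point of this sub-test — bdev_raid_create is deliberately re-run against malloc bdevs that already carry a raid superblock from the earlier array, and the RPC must fail with JSON-RPC error -17 ("File exists"); the test's NOT wrapper inverts the exit status so the case passes only when creation fails. The same check in plain shell, assuming $rpc as above:

if $rpc bdev_raid_create -z 64 -r raid5f \
       -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "unexpected success: stale superblocks should block creation" >&2
    exit 1
fi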
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.638 05:03:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.897 05:03:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.897 "name": "raid_bdev1", 00:23:26.897 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:26.897 "strip_size_kb": 64, 00:23:26.897 "state": "configuring", 00:23:26.897 "raid_level": "raid5f", 00:23:26.897 "superblock": true, 00:23:26.897 "num_base_bdevs": 4, 00:23:26.897 "num_base_bdevs_discovered": 1, 00:23:26.897 "num_base_bdevs_operational": 4, 00:23:26.897 "base_bdevs_list": [ 00:23:26.897 { 00:23:26.897 "name": "pt1", 00:23:26.897 "uuid": "746f6aa4-de71-5376-9239-9ac12f31e3ec", 00:23:26.897 "is_configured": true, 00:23:26.897 "data_offset": 2048, 00:23:26.897 "data_size": 63488 00:23:26.897 }, 00:23:26.897 { 00:23:26.897 "name": null, 00:23:26.897 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:26.897 "is_configured": false, 00:23:26.897 "data_offset": 2048, 00:23:26.897 "data_size": 63488 00:23:26.897 }, 00:23:26.897 { 00:23:26.897 "name": null, 00:23:26.897 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:26.897 "is_configured": false, 00:23:26.897 "data_offset": 2048, 00:23:26.897 "data_size": 63488 00:23:26.897 }, 00:23:26.897 { 00:23:26.897 "name": null, 00:23:26.897 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:26.897 "is_configured": false, 00:23:26.897 "data_offset": 2048, 00:23:26.897 "data_size": 63488 00:23:26.897 } 00:23:26.897 ] 00:23:26.897 }' 00:23:26.897 05:03:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.897 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:27.157 05:03:50 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:23:27.157 05:03:50 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:27.157 [2024-11-18 05:03:50.675976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:27.157 [2024-11-18 05:03:50.676058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.157 [2024-11-18 05:03:50.676089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:23:27.157 [2024-11-18 05:03:50.676102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.157 [2024-11-18 05:03:50.676698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.157 [2024-11-18 05:03:50.676734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:27.157 [2024-11-18 05:03:50.676849] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:27.157 [2024-11-18 05:03:50.676882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:27.416 pt2 00:23:27.416 05:03:50 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:27.416 [2024-11-18 05:03:50.932045] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:27.674 05:03:50 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.674 05:03:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.933 05:03:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.933 "name": "raid_bdev1", 00:23:27.933 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:27.933 "strip_size_kb": 64, 00:23:27.933 "state": "configuring", 00:23:27.933 "raid_level": "raid5f", 00:23:27.933 "superblock": true, 00:23:27.933 "num_base_bdevs": 4, 00:23:27.933 "num_base_bdevs_discovered": 1, 00:23:27.933 "num_base_bdevs_operational": 4, 00:23:27.933 "base_bdevs_list": [ 00:23:27.933 { 00:23:27.933 "name": "pt1", 00:23:27.933 "uuid": "746f6aa4-de71-5376-9239-9ac12f31e3ec", 00:23:27.933 "is_configured": true, 00:23:27.933 "data_offset": 2048, 00:23:27.933 "data_size": 63488 00:23:27.933 }, 00:23:27.933 { 00:23:27.933 "name": null, 00:23:27.933 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:27.933 "is_configured": false, 00:23:27.933 "data_offset": 2048, 00:23:27.933 "data_size": 63488 00:23:27.933 }, 00:23:27.933 { 00:23:27.933 "name": null, 00:23:27.933 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:27.933 "is_configured": false, 00:23:27.933 "data_offset": 2048, 00:23:27.933 "data_size": 63488 00:23:27.933 }, 00:23:27.933 { 00:23:27.933 "name": null, 00:23:27.933 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:27.933 "is_configured": false, 00:23:27.933 "data_offset": 2048, 00:23:27.933 "data_size": 63488 00:23:27.933 } 00:23:27.933 ] 00:23:27.933 }' 00:23:27.933 05:03:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.933 05:03:51 -- common/autotest_common.sh@10 -- # set +x 00:23:28.192 05:03:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:28.192 05:03:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:28.192 05:03:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:28.192 [2024-11-18 05:03:51.620209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.192 [2024-11-18 05:03:51.620276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.192 [2024-11-18 05:03:51.620302] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:23:28.192 [2024-11-18 05:03:51.620316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.192 [2024-11-18 05:03:51.620768] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.192 [2024-11-18 05:03:51.620810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.192 [2024-11-18 05:03:51.620900] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:28.192 [2024-11-18 05:03:51.620932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:28.192 pt2 00:23:28.193 05:03:51 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:28.193 05:03:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:28.193 05:03:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.463 [2024-11-18 05:03:51.800272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:28.463 [2024-11-18 05:03:51.800353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.463 [2024-11-18 05:03:51.800379] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:23:28.463 [2024-11-18 05:03:51.800394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.463 [2024-11-18 05:03:51.800842] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.463 [2024-11-18 05:03:51.800879] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.463 [2024-11-18 05:03:51.800968] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:28.464 [2024-11-18 05:03:51.801005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:28.464 pt3 00:23:28.464 05:03:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:28.464 05:03:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:28.464 05:03:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:28.754 [2024-11-18 05:03:51.980309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:28.754 [2024-11-18 05:03:51.980400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.754 [2024-11-18 05:03:51.980435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:23:28.754 [2024-11-18 05:03:51.980454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.754 [2024-11-18 05:03:51.980993] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.754 [2024-11-18 05:03:51.981038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:28.754 [2024-11-18 05:03:51.981146] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:28.754 [2024-11-18 05:03:51.981179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:28.754 [2024-11-18 05:03:51.981378] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:23:28.755 [2024-11-18 05:03:51.981399] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:28.755 [2024-11-18 05:03:51.981494] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:23:28.755 [2024-11-18 05:03:51.988948] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:23:28.755 [2024-11-18 05:03:51.988972] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:23:28.755 [2024-11-18 05:03:51.989181] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.755 pt4 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.755 "name": "raid_bdev1", 00:23:28.755 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:28.755 "strip_size_kb": 64, 00:23:28.755 "state": "online", 00:23:28.755 "raid_level": "raid5f", 00:23:28.755 "superblock": true, 00:23:28.755 "num_base_bdevs": 4, 00:23:28.755 "num_base_bdevs_discovered": 4, 00:23:28.755 "num_base_bdevs_operational": 4, 00:23:28.755 "base_bdevs_list": [ 00:23:28.755 { 00:23:28.755 "name": "pt1", 00:23:28.755 "uuid": "746f6aa4-de71-5376-9239-9ac12f31e3ec", 00:23:28.755 "is_configured": true, 00:23:28.755 "data_offset": 2048, 00:23:28.755 "data_size": 63488 00:23:28.755 }, 00:23:28.755 { 00:23:28.755 "name": "pt2", 00:23:28.755 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:28.755 "is_configured": true, 00:23:28.755 "data_offset": 2048, 00:23:28.755 "data_size": 63488 00:23:28.755 }, 00:23:28.755 { 00:23:28.755 "name": "pt3", 00:23:28.755 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:28.755 "is_configured": true, 00:23:28.755 "data_offset": 2048, 00:23:28.755 "data_size": 63488 00:23:28.755 }, 00:23:28.755 { 00:23:28.755 "name": "pt4", 00:23:28.755 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:28.755 "is_configured": true, 00:23:28.755 "data_offset": 2048, 00:23:28.755 "data_size": 63488 00:23:28.755 } 00:23:28.755 ] 00:23:28.755 }' 00:23:28.755 05:03:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.755 05:03:52 -- common/autotest_common.sh@10 -- # set +x 00:23:29.035 05:03:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:29.035 05:03:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:29.303 [2024-11-18 05:03:52.699540] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.303 05:03:52 -- bdev/bdev_raid.sh@430 -- # '[' 9f204e8b-bd2c-492f-84d8-cf3be67b27cd '!=' 9f204e8b-bd2c-492f-84d8-cf3be67b27cd ']' 00:23:29.303 05:03:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:29.303 05:03:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:29.303 05:03:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:29.303 05:03:52 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:29.562 [2024-11-18 05:03:52.887480] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.562 05:03:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.821 05:03:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.821 "name": "raid_bdev1", 00:23:29.821 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:29.821 "strip_size_kb": 64, 00:23:29.821 "state": "online", 00:23:29.821 "raid_level": "raid5f", 00:23:29.821 "superblock": true, 00:23:29.821 "num_base_bdevs": 4, 00:23:29.821 "num_base_bdevs_discovered": 3, 00:23:29.821 "num_base_bdevs_operational": 3, 00:23:29.821 "base_bdevs_list": [ 00:23:29.821 { 00:23:29.821 "name": null, 00:23:29.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.821 "is_configured": false, 00:23:29.821 "data_offset": 2048, 00:23:29.821 "data_size": 63488 00:23:29.821 }, 00:23:29.821 { 00:23:29.821 "name": "pt2", 00:23:29.821 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:29.821 "is_configured": true, 00:23:29.821 "data_offset": 2048, 00:23:29.821 "data_size": 63488 00:23:29.821 }, 00:23:29.821 { 00:23:29.821 "name": "pt3", 00:23:29.821 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:29.821 "is_configured": true, 00:23:29.821 "data_offset": 2048, 00:23:29.821 "data_size": 63488 00:23:29.821 }, 00:23:29.821 { 00:23:29.821 "name": "pt4", 00:23:29.821 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:29.821 "is_configured": true, 00:23:29.821 "data_offset": 2048, 00:23:29.821 "data_size": 63488 00:23:29.821 } 00:23:29.821 ] 00:23:29.821 }' 00:23:29.821 05:03:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.821 05:03:53 -- common/autotest_common.sh@10 -- # set +x 00:23:30.080 05:03:53 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.339 [2024-11-18 05:03:53.679613] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.339 [2024-11-18 05:03:53.679782] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.339 [2024-11-18 05:03:53.679871] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.339 [2024-11-18 05:03:53.679954] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.339 [2024-11-18 05:03:53.679968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:23:30.339 05:03:53 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:30.339 05:03:53 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.598 
05:03:53 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:30.598 05:03:53 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:30.598 05:03:53 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:30.598 05:03:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:30.598 05:03:53 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:30.598 05:03:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:30.598 05:03:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:30.598 05:03:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:30.858 05:03:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:30.858 05:03:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:30.858 05:03:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:31.117 [2024-11-18 05:03:54.595870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:31.117 [2024-11-18 05:03:54.595963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.117 [2024-11-18 05:03:54.595992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:23:31.117 [2024-11-18 05:03:54.596005] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.117 [2024-11-18 05:03:54.598191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.117 [2024-11-18 05:03:54.598268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:31.117 [2024-11-18 05:03:54.598362] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:31.117 [2024-11-18 05:03:54.598411] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:31.117 pt2 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.117 05:03:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.377 05:03:54 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:23:31.377 "name": "raid_bdev1", 00:23:31.377 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:31.377 "strip_size_kb": 64, 00:23:31.377 "state": "configuring", 00:23:31.377 "raid_level": "raid5f", 00:23:31.377 "superblock": true, 00:23:31.377 "num_base_bdevs": 4, 00:23:31.377 "num_base_bdevs_discovered": 1, 00:23:31.377 "num_base_bdevs_operational": 3, 00:23:31.377 "base_bdevs_list": [ 00:23:31.377 { 00:23:31.377 "name": null, 00:23:31.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.377 "is_configured": false, 00:23:31.377 "data_offset": 2048, 00:23:31.377 "data_size": 63488 00:23:31.377 }, 00:23:31.377 { 00:23:31.377 "name": "pt2", 00:23:31.377 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:31.377 "is_configured": true, 00:23:31.377 "data_offset": 2048, 00:23:31.377 "data_size": 63488 00:23:31.377 }, 00:23:31.377 { 00:23:31.377 "name": null, 00:23:31.377 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:31.377 "is_configured": false, 00:23:31.377 "data_offset": 2048, 00:23:31.377 "data_size": 63488 00:23:31.377 }, 00:23:31.377 { 00:23:31.377 "name": null, 00:23:31.377 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:31.377 "is_configured": false, 00:23:31.377 "data_offset": 2048, 00:23:31.377 "data_size": 63488 00:23:31.377 } 00:23:31.377 ] 00:23:31.377 }' 00:23:31.377 05:03:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.377 05:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:31.636 05:03:55 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:31.636 05:03:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:31.636 05:03:55 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:31.895 [2024-11-18 05:03:55.336010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:31.895 [2024-11-18 05:03:55.336086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.895 [2024-11-18 05:03:55.336116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:23:31.895 [2024-11-18 05:03:55.336129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.895 [2024-11-18 05:03:55.336610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.895 [2024-11-18 05:03:55.336645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:31.895 [2024-11-18 05:03:55.336752] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:31.895 [2024-11-18 05:03:55.336801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:31.895 pt3 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
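(annotation, not part of the captured output) The JSON blobs above are what the verify_raid_bdev_state helper in test/bdev/bdev_raid.sh consumes: it pulls the raid bdev's info over the RPC socket and asserts state, level, strip size, and base-bdev counts against the expected values passed in. A minimal sketch of those checks, reusing the socket path and names visible in the trace — the inline jq filters are illustrative, not the script's exact lines:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # fetch only the raid bdev under test from the full listing
  tmp=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # assert the top-level fields match the expected configuration
  [[ $(jq -r '.state' <<<"$tmp") == configuring ]]
  [[ $(jq -r '.raid_level' <<<"$tmp") == raid5f ]]
  [[ $(jq -r '.strip_size_kb' <<<"$tmp") -eq 64 ]]
  # count base bdevs that are already wired in (is_configured == true)
  [[ $(jq -r '[.base_bdevs_list[] | select(.is_configured)] | length' <<<"$tmp") -eq 1 ]]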
00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.895 05:03:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.154 05:03:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.154 "name": "raid_bdev1", 00:23:32.154 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:32.154 "strip_size_kb": 64, 00:23:32.154 "state": "configuring", 00:23:32.154 "raid_level": "raid5f", 00:23:32.154 "superblock": true, 00:23:32.154 "num_base_bdevs": 4, 00:23:32.154 "num_base_bdevs_discovered": 2, 00:23:32.154 "num_base_bdevs_operational": 3, 00:23:32.154 "base_bdevs_list": [ 00:23:32.154 { 00:23:32.154 "name": null, 00:23:32.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.154 "is_configured": false, 00:23:32.154 "data_offset": 2048, 00:23:32.154 "data_size": 63488 00:23:32.154 }, 00:23:32.154 { 00:23:32.154 "name": "pt2", 00:23:32.154 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:32.154 "is_configured": true, 00:23:32.154 "data_offset": 2048, 00:23:32.154 "data_size": 63488 00:23:32.154 }, 00:23:32.154 { 00:23:32.154 "name": "pt3", 00:23:32.154 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:32.154 "is_configured": true, 00:23:32.154 "data_offset": 2048, 00:23:32.154 "data_size": 63488 00:23:32.154 }, 00:23:32.154 { 00:23:32.154 "name": null, 00:23:32.154 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:32.154 "is_configured": false, 00:23:32.154 "data_offset": 2048, 00:23:32.154 "data_size": 63488 00:23:32.154 } 00:23:32.154 ] 00:23:32.154 }' 00:23:32.154 05:03:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.154 05:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:32.413 05:03:55 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:32.413 05:03:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:32.413 05:03:55 -- bdev/bdev_raid.sh@462 -- # i=3 00:23:32.413 05:03:55 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:32.673 [2024-11-18 05:03:56.032175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:32.673 [2024-11-18 05:03:56.032267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.673 [2024-11-18 05:03:56.032303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:23:32.673 [2024-11-18 05:03:56.032323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.673 [2024-11-18 05:03:56.032797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.673 [2024-11-18 05:03:56.032818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:32.673 [2024-11-18 05:03:56.032939] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:32.673 [2024-11-18 05:03:56.032965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:32.673 [2024-11-18 05:03:56.033097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:23:32.673 [2024-11-18 05:03:56.033111] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:32.673 [2024-11-18 05:03:56.033199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005930 00:23:32.673 [2024-11-18 05:03:56.038649] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:23:32.673 [2024-11-18 05:03:56.038694] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:23:32.673 [2024-11-18 05:03:56.038974] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.673 pt4 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.673 05:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.933 05:03:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.933 "name": "raid_bdev1", 00:23:32.933 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:32.933 "strip_size_kb": 64, 00:23:32.933 "state": "online", 00:23:32.933 "raid_level": "raid5f", 00:23:32.933 "superblock": true, 00:23:32.933 "num_base_bdevs": 4, 00:23:32.933 "num_base_bdevs_discovered": 3, 00:23:32.933 "num_base_bdevs_operational": 3, 00:23:32.933 "base_bdevs_list": [ 00:23:32.933 { 00:23:32.933 "name": null, 00:23:32.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.933 "is_configured": false, 00:23:32.933 "data_offset": 2048, 00:23:32.933 "data_size": 63488 00:23:32.933 }, 00:23:32.933 { 00:23:32.933 "name": "pt2", 00:23:32.933 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:32.933 "is_configured": true, 00:23:32.933 "data_offset": 2048, 00:23:32.933 "data_size": 63488 00:23:32.933 }, 00:23:32.933 { 00:23:32.933 "name": "pt3", 00:23:32.933 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:32.933 "is_configured": true, 00:23:32.933 "data_offset": 2048, 00:23:32.933 "data_size": 63488 00:23:32.933 }, 00:23:32.933 { 00:23:32.933 "name": "pt4", 00:23:32.933 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:32.933 "is_configured": true, 00:23:32.933 "data_offset": 2048, 00:23:32.933 "data_size": 63488 00:23:32.933 } 00:23:32.933 ] 00:23:32.933 }' 00:23:32.933 05:03:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.933 05:03:56 -- common/autotest_common.sh@10 -- # set +x 00:23:33.192 05:03:56 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:23:33.192 05:03:56 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:33.451 [2024-11-18 05:03:56.808713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.451 [2024-11-18 05:03:56.808746] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.451 [2024-11-18 05:03:56.808822] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.451 [2024-11-18 05:03:56.808892] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.451 [2024-11-18 05:03:56.808910] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:23:33.451 05:03:56 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.451 05:03:56 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:33.710 [2024-11-18 05:03:57.172775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:33.710 [2024-11-18 05:03:57.172856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.710 [2024-11-18 05:03:57.172883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:23:33.710 [2024-11-18 05:03:57.172898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.710 [2024-11-18 05:03:57.175089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.710 [2024-11-18 05:03:57.175130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:33.710 [2024-11-18 05:03:57.175260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:33.710 [2024-11-18 05:03:57.175326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:33.710 pt1 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.710 05:03:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.969 05:03:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.969 "name": "raid_bdev1", 00:23:33.969 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:33.969 "strip_size_kb": 64, 00:23:33.969 "state": "configuring", 00:23:33.969 "raid_level": "raid5f", 00:23:33.969 "superblock": true, 00:23:33.969 "num_base_bdevs": 4, 00:23:33.969 "num_base_bdevs_discovered": 1, 00:23:33.969 "num_base_bdevs_operational": 4, 00:23:33.969 "base_bdevs_list": [ 00:23:33.969 { 00:23:33.969 "name": "pt1", 00:23:33.969 "uuid": "746f6aa4-de71-5376-9239-9ac12f31e3ec", 00:23:33.969 "is_configured": true, 
00:23:33.969 "data_offset": 2048, 00:23:33.969 "data_size": 63488 00:23:33.969 }, 00:23:33.969 { 00:23:33.969 "name": null, 00:23:33.969 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:33.969 "is_configured": false, 00:23:33.969 "data_offset": 2048, 00:23:33.969 "data_size": 63488 00:23:33.969 }, 00:23:33.969 { 00:23:33.969 "name": null, 00:23:33.969 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:33.969 "is_configured": false, 00:23:33.969 "data_offset": 2048, 00:23:33.969 "data_size": 63488 00:23:33.969 }, 00:23:33.969 { 00:23:33.969 "name": null, 00:23:33.969 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:33.969 "is_configured": false, 00:23:33.969 "data_offset": 2048, 00:23:33.969 "data_size": 63488 00:23:33.969 } 00:23:33.969 ] 00:23:33.969 }' 00:23:33.969 05:03:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.969 05:03:57 -- common/autotest_common.sh@10 -- # set +x 00:23:34.227 05:03:57 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:34.227 05:03:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:34.227 05:03:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:34.486 05:03:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:34.486 05:03:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:34.486 05:03:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:34.745 05:03:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:34.745 05:03:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:34.745 05:03:58 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:35.004 05:03:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:35.004 05:03:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:35.004 05:03:58 -- bdev/bdev_raid.sh@489 -- # i=3 00:23:35.004 05:03:58 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:35.004 [2024-11-18 05:03:58.457056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:35.004 [2024-11-18 05:03:58.457133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.004 [2024-11-18 05:03:58.457158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:23:35.004 [2024-11-18 05:03:58.457172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.004 [2024-11-18 05:03:58.457634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.005 [2024-11-18 05:03:58.457673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:35.005 [2024-11-18 05:03:58.457776] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:35.005 [2024-11-18 05:03:58.457840] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:35.005 [2024-11-18 05:03:58.457853] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.005 [2024-11-18 05:03:58.457879] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 00:23:35.005 [2024-11-18 05:03:58.457947] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:35.005 pt4 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.005 05:03:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.264 05:03:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.264 "name": "raid_bdev1", 00:23:35.264 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:35.264 "strip_size_kb": 64, 00:23:35.264 "state": "configuring", 00:23:35.264 "raid_level": "raid5f", 00:23:35.264 "superblock": true, 00:23:35.264 "num_base_bdevs": 4, 00:23:35.264 "num_base_bdevs_discovered": 1, 00:23:35.264 "num_base_bdevs_operational": 3, 00:23:35.264 "base_bdevs_list": [ 00:23:35.264 { 00:23:35.264 "name": null, 00:23:35.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.264 "is_configured": false, 00:23:35.264 "data_offset": 2048, 00:23:35.264 "data_size": 63488 00:23:35.264 }, 00:23:35.264 { 00:23:35.264 "name": null, 00:23:35.264 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:35.264 "is_configured": false, 00:23:35.264 "data_offset": 2048, 00:23:35.264 "data_size": 63488 00:23:35.264 }, 00:23:35.264 { 00:23:35.264 "name": null, 00:23:35.264 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:35.264 "is_configured": false, 00:23:35.264 "data_offset": 2048, 00:23:35.264 "data_size": 63488 00:23:35.264 }, 00:23:35.264 { 00:23:35.264 "name": "pt4", 00:23:35.264 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:35.264 "is_configured": true, 00:23:35.264 "data_offset": 2048, 00:23:35.264 "data_size": 63488 00:23:35.264 } 00:23:35.264 ] 00:23:35.264 }' 00:23:35.264 05:03:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.264 05:03:58 -- common/autotest_common.sh@10 -- # set +x 00:23:35.523 05:03:58 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:35.523 05:03:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:35.523 05:03:58 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:35.784 [2024-11-18 05:03:59.197281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:35.784 [2024-11-18 05:03:59.197373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.784 [2024-11-18 05:03:59.197410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:23:35.784 [2024-11-18 05:03:59.197424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.784 [2024-11-18 05:03:59.197938] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.784 [2024-11-18 05:03:59.197961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:35.784 [2024-11-18 05:03:59.198055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:35.784 [2024-11-18 05:03:59.198088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.784 pt2 00:23:35.784 05:03:59 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:35.784 05:03:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:35.784 05:03:59 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:36.046 [2024-11-18 05:03:59.445306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:36.046 [2024-11-18 05:03:59.445529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.046 [2024-11-18 05:03:59.445625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:23:36.046 [2024-11-18 05:03:59.445737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.046 [2024-11-18 05:03:59.446252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.046 [2024-11-18 05:03:59.446396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:36.046 [2024-11-18 05:03:59.446599] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:36.046 [2024-11-18 05:03:59.446728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:36.046 [2024-11-18 05:03:59.446919] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:23:36.046 [2024-11-18 05:03:59.447024] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:36.046 [2024-11-18 05:03:59.447155] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:36.046 [2024-11-18 05:03:59.452629] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:23:36.046 [2024-11-18 05:03:59.452765] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:23:36.046 [2024-11-18 05:03:59.453139] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.046 pt3 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.046 05:03:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.304 05:03:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.304 "name": "raid_bdev1", 00:23:36.304 "uuid": "9f204e8b-bd2c-492f-84d8-cf3be67b27cd", 00:23:36.304 "strip_size_kb": 64, 00:23:36.304 "state": "online", 00:23:36.304 "raid_level": "raid5f", 00:23:36.304 "superblock": true, 00:23:36.304 "num_base_bdevs": 4, 00:23:36.304 "num_base_bdevs_discovered": 3, 00:23:36.304 "num_base_bdevs_operational": 3, 00:23:36.304 "base_bdevs_list": [ 00:23:36.304 { 00:23:36.304 "name": null, 00:23:36.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.304 "is_configured": false, 00:23:36.304 "data_offset": 2048, 00:23:36.304 "data_size": 63488 00:23:36.304 }, 00:23:36.304 { 00:23:36.304 "name": "pt2", 00:23:36.304 "uuid": "34384481-497d-5c00-a72d-c1d73dbe8ddb", 00:23:36.304 "is_configured": true, 00:23:36.304 "data_offset": 2048, 00:23:36.304 "data_size": 63488 00:23:36.304 }, 00:23:36.304 { 00:23:36.304 "name": "pt3", 00:23:36.304 "uuid": "dbfc56a9-ba87-52d7-a08e-47db3866eb9b", 00:23:36.304 "is_configured": true, 00:23:36.304 "data_offset": 2048, 00:23:36.304 "data_size": 63488 00:23:36.304 }, 00:23:36.304 { 00:23:36.304 "name": "pt4", 00:23:36.304 "uuid": "b60cb81e-4602-54fa-aa22-29586f32db10", 00:23:36.304 "is_configured": true, 00:23:36.304 "data_offset": 2048, 00:23:36.304 "data_size": 63488 00:23:36.304 } 00:23:36.304 ] 00:23:36.304 }' 00:23:36.304 05:03:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.304 05:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:36.563 05:03:59 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:36.563 05:03:59 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:36.821 [2024-11-18 05:04:00.139054] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.821 05:04:00 -- bdev/bdev_raid.sh@506 -- # '[' 9f204e8b-bd2c-492f-84d8-cf3be67b27cd '!=' 9f204e8b-bd2c-492f-84d8-cf3be67b27cd ']' 00:23:36.821 05:04:00 -- bdev/bdev_raid.sh@511 -- # killprocess 85382 00:23:36.821 05:04:00 -- common/autotest_common.sh@936 -- # '[' -z 85382 ']' 00:23:36.821 05:04:00 -- common/autotest_common.sh@940 -- # kill -0 85382 00:23:36.821 05:04:00 -- common/autotest_common.sh@941 -- # uname 00:23:36.821 05:04:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.821 05:04:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85382 00:23:36.821 killing process with pid 85382 00:23:36.821 05:04:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:36.821 05:04:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:36.821 05:04:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85382' 00:23:36.821 05:04:00 -- common/autotest_common.sh@955 -- # kill 85382 00:23:36.821 [2024-11-18 05:04:00.192826] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:36.821 [2024-11-18 05:04:00.192900] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.821 05:04:00 -- common/autotest_common.sh@960 -- # wait 85382 00:23:36.821 [2024-11-18 05:04:00.192972] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.821 [2024-11-18 05:04:00.193003] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:23:37.079 [2024-11-18 05:04:00.460565] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:38.018 00:23:38.018 real 0m16.829s 00:23:38.018 user 0m29.182s 00:23:38.018 sys 0m2.508s 00:23:38.018 ************************************ 00:23:38.018 END TEST raid5f_superblock_test 00:23:38.018 ************************************ 00:23:38.018 05:04:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:38.018 05:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:23:38.018 05:04:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:38.018 05:04:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.018 05:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:38.018 ************************************ 00:23:38.018 START TEST raid5f_rebuild_test 00:23:38.018 ************************************ 00:23:38.018 05:04:01 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=85971 
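(annotation, not part of the captured output) At this point raid5f_rebuild_test has fixed its parameters: raid5f across four malloc base bdevs, no on-disk superblock, no background I/O, and a 64 KiB strip — create_arg picks up '-z 64' because raid5f, unlike raid1, is striped. Once bdevperf is listening on /var/tmp/spdk-raid.sock, the array is assembled with an RPC of the form that appears later in this trace (sketch, names taken from the log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1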
00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 85971 /var/tmp/spdk-raid.sock 00:23:38.018 05:04:01 -- common/autotest_common.sh@829 -- # '[' -z 85971 ']' 00:23:38.018 05:04:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:38.018 05:04:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.018 05:04:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:38.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:38.018 05:04:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:38.018 05:04:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.018 05:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:38.018 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:38.018 Zero copy mechanism will not be used. 00:23:38.018 [2024-11-18 05:04:01.489681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:38.018 [2024-11-18 05:04:01.489859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85971 ] 00:23:38.277 [2024-11-18 05:04:01.659785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.537 [2024-11-18 05:04:01.813288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.537 [2024-11-18 05:04:01.953173] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.105 05:04:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.105 05:04:02 -- common/autotest_common.sh@862 -- # return 0 00:23:39.105 05:04:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:39.105 05:04:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:39.105 05:04:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:39.105 BaseBdev1 00:23:39.105 05:04:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:39.105 05:04:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:39.105 05:04:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:39.364 BaseBdev2 00:23:39.364 05:04:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:39.364 05:04:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:39.364 05:04:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:39.623 BaseBdev3 00:23:39.624 05:04:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:39.624 05:04:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:39.624 05:04:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:39.883 BaseBdev4 00:23:39.883 05:04:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:40.141 spare_malloc 00:23:40.142 05:04:03 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:40.142 spare_delay 00:23:40.142 05:04:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:40.401 [2024-11-18 05:04:03.824433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:40.401 [2024-11-18 05:04:03.824704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.401 [2024-11-18 05:04:03.824748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:23:40.401 [2024-11-18 05:04:03.824767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.401 [2024-11-18 05:04:03.827215] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.401 [2024-11-18 05:04:03.827264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:40.401 spare 00:23:40.401 05:04:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:40.660 [2024-11-18 05:04:04.012575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.660 [2024-11-18 05:04:04.014627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:40.660 [2024-11-18 05:04:04.014677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:40.660 [2024-11-18 05:04:04.014725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:40.660 [2024-11-18 05:04:04.014805] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:23:40.660 [2024-11-18 05:04:04.014820] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:40.660 [2024-11-18 05:04:04.014936] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:23:40.660 [2024-11-18 05:04:04.020816] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:23:40.660 [2024-11-18 05:04:04.020838] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:23:40.660 [2024-11-18 05:04:04.021061] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
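(annotation, not part of the captured output) Two details above are worth unpacking. First, the spare is a malloc bdev wrapped in a delay bdev: in bdev_delay_create the -r/-t pair sets average and tail read latency to 0 while -w/-n set average and tail write latency to 100000 microseconds, so reads pass straight through but writes to the spare are slowed enough for the test to observe intermediate state — flag meanings here follow SPDK's bdev_delay_create RPC and should be treated as this note's assumption, not as something the log states. Second, the reported array size is consistent with the inputs:

  # each base bdev: bdev_malloc_create 32 512 -> 32 MiB / 512 B = 65536 blocks
  # raid5f keeps one parity strip per 4-disk stripe, so 3 of 4 carry data:
  # (4 - 1) * 65536 = 196608 blocks -> matches 'blockcnt 196608, blocklen 512'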
00:23:40.660 05:04:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.919 05:04:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.919 "name": "raid_bdev1", 00:23:40.919 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:40.919 "strip_size_kb": 64, 00:23:40.919 "state": "online", 00:23:40.919 "raid_level": "raid5f", 00:23:40.919 "superblock": false, 00:23:40.919 "num_base_bdevs": 4, 00:23:40.919 "num_base_bdevs_discovered": 4, 00:23:40.919 "num_base_bdevs_operational": 4, 00:23:40.919 "base_bdevs_list": [ 00:23:40.919 { 00:23:40.919 "name": "BaseBdev1", 00:23:40.919 "uuid": "50d7929b-77cf-4111-b4bd-3424b9b0ba93", 00:23:40.919 "is_configured": true, 00:23:40.919 "data_offset": 0, 00:23:40.919 "data_size": 65536 00:23:40.919 }, 00:23:40.919 { 00:23:40.919 "name": "BaseBdev2", 00:23:40.919 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:40.919 "is_configured": true, 00:23:40.919 "data_offset": 0, 00:23:40.919 "data_size": 65536 00:23:40.919 }, 00:23:40.919 { 00:23:40.919 "name": "BaseBdev3", 00:23:40.919 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:40.919 "is_configured": true, 00:23:40.919 "data_offset": 0, 00:23:40.919 "data_size": 65536 00:23:40.919 }, 00:23:40.919 { 00:23:40.919 "name": "BaseBdev4", 00:23:40.919 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:40.919 "is_configured": true, 00:23:40.919 "data_offset": 0, 00:23:40.919 "data_size": 65536 00:23:40.919 } 00:23:40.919 ] 00:23:40.919 }' 00:23:40.919 05:04:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.919 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:23:41.178 05:04:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:41.178 05:04:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:41.437 [2024-11-18 05:04:04.719115] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:41.437 05:04:04 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:23:41.437 05:04:04 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.437 05:04:04 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:41.696 05:04:04 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:41.697 05:04:04 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:41.697 05:04:04 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:41.697 05:04:04 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@12 -- # local i 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:41.697 05:04:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:41.697 [2024-11-18 05:04:05.159118] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:41.697 /dev/nbd0 00:23:41.697 05:04:05 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:23:41.697 05:04:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:41.697 05:04:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:41.697 05:04:05 -- common/autotest_common.sh@867 -- # local i 00:23:41.697 05:04:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:41.697 05:04:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:41.697 05:04:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:41.697 05:04:05 -- common/autotest_common.sh@871 -- # break 00:23:41.697 05:04:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:41.697 05:04:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:41.697 05:04:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:41.697 1+0 records in 00:23:41.697 1+0 records out 00:23:41.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288504 s, 14.2 MB/s 00:23:41.697 05:04:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.956 05:04:05 -- common/autotest_common.sh@884 -- # size=4096 00:23:41.956 05:04:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.956 05:04:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:41.956 05:04:05 -- common/autotest_common.sh@887 -- # return 0 00:23:41.956 05:04:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.956 05:04:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:41.956 05:04:05 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:41.956 05:04:05 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:23:41.956 05:04:05 -- bdev/bdev_raid.sh@582 -- # echo 192 00:23:41.956 05:04:05 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:42.216 512+0 records in 00:23:42.216 512+0 records out 00:23:42.216 100663296 bytes (101 MB, 96 MiB) copied, 0.48239 s, 209 MB/s 00:23:42.216 05:04:05 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:42.216 05:04:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:42.216 05:04:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:42.216 05:04:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:42.216 05:04:05 -- bdev/nbd_common.sh@51 -- # local i 00:23:42.216 05:04:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:42.216 05:04:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:42.475 [2024-11-18 05:04:05.907765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@41 -- # break 00:23:42.475 05:04:05 -- bdev/nbd_common.sh@45 -- # return 0 00:23:42.475 05:04:05 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:42.734 [2024-11-18 05:04:06.075197] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
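The array has just been filled and then degraded: dd issued 512 direct writes of 196608 bytes each (one full stripe, i.e. 3 data strips of 64 KiB, matching write_unit_size of 384 blocks at 512 bytes), for 100663296 bytes total, which is exactly the 196608-block capacity reported for raid_bdev1; BaseBdev1 was then hot-removed. A condensed sketch of this step, under the same assumptions as the sketch above:

# Expose the array as a block device, fill it with full-stripe writes, detach.
$RPC nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct   # 96 MiB
$RPC nbd_stop_disk /dev/nbd0
# Hot-remove one member; a raid5f array must stay online with 3 of 4 bdevs.
$RPC bdev_raid_remove_base_bdev BaseBdev1

The verification that follows expects num_base_bdevs_discovered to drop to 3 while the state stays online, with a null placeholder left in BaseBdev1's slot of base_bdevs_list.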
00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.734 05:04:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.993 05:04:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.993 "name": "raid_bdev1", 00:23:42.993 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:42.993 "strip_size_kb": 64, 00:23:42.993 "state": "online", 00:23:42.993 "raid_level": "raid5f", 00:23:42.993 "superblock": false, 00:23:42.993 "num_base_bdevs": 4, 00:23:42.993 "num_base_bdevs_discovered": 3, 00:23:42.993 "num_base_bdevs_operational": 3, 00:23:42.993 "base_bdevs_list": [ 00:23:42.993 { 00:23:42.993 "name": null, 00:23:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.993 "is_configured": false, 00:23:42.993 "data_offset": 0, 00:23:42.993 "data_size": 65536 00:23:42.993 }, 00:23:42.993 { 00:23:42.993 "name": "BaseBdev2", 00:23:42.993 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:42.993 "is_configured": true, 00:23:42.993 "data_offset": 0, 00:23:42.993 "data_size": 65536 00:23:42.993 }, 00:23:42.993 { 00:23:42.993 "name": "BaseBdev3", 00:23:42.993 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:42.993 "is_configured": true, 00:23:42.993 "data_offset": 0, 00:23:42.993 "data_size": 65536 00:23:42.993 }, 00:23:42.993 { 00:23:42.993 "name": "BaseBdev4", 00:23:42.993 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:42.993 "is_configured": true, 00:23:42.993 "data_offset": 0, 00:23:42.993 "data_size": 65536 00:23:42.993 } 00:23:42.993 ] 00:23:42.993 }' 00:23:42.994 05:04:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.994 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 05:04:06 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:43.512 [2024-11-18 05:04:06.851407] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:43.512 [2024-11-18 05:04:06.851452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:43.512 [2024-11-18 05:04:06.861625] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:23:43.512 [2024-11-18 05:04:06.868490] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.512 05:04:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
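With the spare attached via bdev_raid_add_base_bdev, the target logs "Started rebuild on raid bdev raid_bdev1", and the verify_raid_bdev_process helper traced below reads the raid bdev's .process object and asserts type=rebuild, target=spare. Reduced to its core, the check is a jq probe, sketched here under the same assumptions as above:

# Attach the spare and confirm a rebuild targeting it is in flight.
$RPC bdev_raid_add_base_bdev raid_bdev1 spare
$RPC bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | .process.type, .process.target'
# Expected output while the rebuild is running:
#   rebuild
#   spare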
00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.456 05:04:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.715 05:04:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.715 "name": "raid_bdev1", 00:23:44.715 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:44.715 "strip_size_kb": 64, 00:23:44.715 "state": "online", 00:23:44.715 "raid_level": "raid5f", 00:23:44.715 "superblock": false, 00:23:44.715 "num_base_bdevs": 4, 00:23:44.715 "num_base_bdevs_discovered": 4, 00:23:44.715 "num_base_bdevs_operational": 4, 00:23:44.715 "process": { 00:23:44.715 "type": "rebuild", 00:23:44.715 "target": "spare", 00:23:44.715 "progress": { 00:23:44.715 "blocks": 23040, 00:23:44.715 "percent": 11 00:23:44.715 } 00:23:44.715 }, 00:23:44.715 "base_bdevs_list": [ 00:23:44.715 { 00:23:44.715 "name": "spare", 00:23:44.715 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:44.715 "is_configured": true, 00:23:44.715 "data_offset": 0, 00:23:44.715 "data_size": 65536 00:23:44.715 }, 00:23:44.715 { 00:23:44.715 "name": "BaseBdev2", 00:23:44.715 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:44.715 "is_configured": true, 00:23:44.715 "data_offset": 0, 00:23:44.715 "data_size": 65536 00:23:44.715 }, 00:23:44.715 { 00:23:44.715 "name": "BaseBdev3", 00:23:44.715 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:44.715 "is_configured": true, 00:23:44.715 "data_offset": 0, 00:23:44.715 "data_size": 65536 00:23:44.715 }, 00:23:44.715 { 00:23:44.715 "name": "BaseBdev4", 00:23:44.715 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:44.715 "is_configured": true, 00:23:44.715 "data_offset": 0, 00:23:44.715 "data_size": 65536 00:23:44.715 } 00:23:44.715 ] 00:23:44.715 }' 00:23:44.715 05:04:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.715 05:04:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.715 05:04:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.715 05:04:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.715 05:04:08 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:44.974 [2024-11-18 05:04:08.361579] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.974 [2024-11-18 05:04:08.377429] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:44.974 [2024-11-18 05:04:08.377504] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.974 05:04:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.234 05:04:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.234 "name": "raid_bdev1", 00:23:45.234 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:45.234 "strip_size_kb": 64, 00:23:45.234 "state": "online", 00:23:45.234 "raid_level": "raid5f", 00:23:45.234 "superblock": false, 00:23:45.234 "num_base_bdevs": 4, 00:23:45.234 "num_base_bdevs_discovered": 3, 00:23:45.234 "num_base_bdevs_operational": 3, 00:23:45.234 "base_bdevs_list": [ 00:23:45.234 { 00:23:45.234 "name": null, 00:23:45.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.234 "is_configured": false, 00:23:45.234 "data_offset": 0, 00:23:45.234 "data_size": 65536 00:23:45.234 }, 00:23:45.234 { 00:23:45.234 "name": "BaseBdev2", 00:23:45.234 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:45.234 "is_configured": true, 00:23:45.234 "data_offset": 0, 00:23:45.234 "data_size": 65536 00:23:45.234 }, 00:23:45.234 { 00:23:45.234 "name": "BaseBdev3", 00:23:45.234 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:45.234 "is_configured": true, 00:23:45.234 "data_offset": 0, 00:23:45.234 "data_size": 65536 00:23:45.234 }, 00:23:45.234 { 00:23:45.234 "name": "BaseBdev4", 00:23:45.234 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:45.234 "is_configured": true, 00:23:45.234 "data_offset": 0, 00:23:45.234 "data_size": 65536 00:23:45.234 } 00:23:45.234 ] 00:23:45.234 }' 00:23:45.234 05:04:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.234 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.494 05:04:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.753 05:04:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:45.753 "name": "raid_bdev1", 00:23:45.753 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:45.753 "strip_size_kb": 64, 00:23:45.753 "state": "online", 00:23:45.753 "raid_level": "raid5f", 00:23:45.753 "superblock": false, 00:23:45.753 "num_base_bdevs": 4, 00:23:45.753 "num_base_bdevs_discovered": 3, 00:23:45.753 "num_base_bdevs_operational": 3, 00:23:45.753 "base_bdevs_list": [ 00:23:45.753 { 00:23:45.753 "name": null, 00:23:45.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.753 "is_configured": false, 00:23:45.753 "data_offset": 0, 00:23:45.753 "data_size": 65536 00:23:45.753 }, 00:23:45.753 { 00:23:45.753 "name": "BaseBdev2", 00:23:45.753 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:45.753 "is_configured": true, 00:23:45.753 "data_offset": 0, 00:23:45.753 "data_size": 65536 00:23:45.753 }, 00:23:45.753 { 00:23:45.753 "name": "BaseBdev3", 00:23:45.753 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:45.753 "is_configured": true, 
00:23:45.753 "data_offset": 0, 00:23:45.753 "data_size": 65536 00:23:45.753 }, 00:23:45.753 { 00:23:45.753 "name": "BaseBdev4", 00:23:45.753 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:45.753 "is_configured": true, 00:23:45.753 "data_offset": 0, 00:23:45.753 "data_size": 65536 00:23:45.753 } 00:23:45.753 ] 00:23:45.753 }' 00:23:45.753 05:04:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:45.753 05:04:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:45.753 05:04:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:45.753 05:04:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:45.753 05:04:09 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:46.013 [2024-11-18 05:04:09.344680] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:46.013 [2024-11-18 05:04:09.344720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:46.013 [2024-11-18 05:04:09.354110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b0d0 00:23:46.013 [2024-11-18 05:04:09.360735] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:46.013 05:04:09 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.948 05:04:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.208 "name": "raid_bdev1", 00:23:47.208 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:47.208 "strip_size_kb": 64, 00:23:47.208 "state": "online", 00:23:47.208 "raid_level": "raid5f", 00:23:47.208 "superblock": false, 00:23:47.208 "num_base_bdevs": 4, 00:23:47.208 "num_base_bdevs_discovered": 4, 00:23:47.208 "num_base_bdevs_operational": 4, 00:23:47.208 "process": { 00:23:47.208 "type": "rebuild", 00:23:47.208 "target": "spare", 00:23:47.208 "progress": { 00:23:47.208 "blocks": 23040, 00:23:47.208 "percent": 11 00:23:47.208 } 00:23:47.208 }, 00:23:47.208 "base_bdevs_list": [ 00:23:47.208 { 00:23:47.208 "name": "spare", 00:23:47.208 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:47.208 "is_configured": true, 00:23:47.208 "data_offset": 0, 00:23:47.208 "data_size": 65536 00:23:47.208 }, 00:23:47.208 { 00:23:47.208 "name": "BaseBdev2", 00:23:47.208 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:47.208 "is_configured": true, 00:23:47.208 "data_offset": 0, 00:23:47.208 "data_size": 65536 00:23:47.208 }, 00:23:47.208 { 00:23:47.208 "name": "BaseBdev3", 00:23:47.208 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:47.208 "is_configured": true, 00:23:47.208 "data_offset": 0, 00:23:47.208 "data_size": 65536 00:23:47.208 }, 00:23:47.208 { 00:23:47.208 "name": "BaseBdev4", 00:23:47.208 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:47.208 "is_configured": true, 00:23:47.208 "data_offset": 0, 
00:23:47.208 "data_size": 65536 00:23:47.208 } 00:23:47.208 ] 00:23:47.208 }' 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@657 -- # local timeout=624 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.208 05:04:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.468 05:04:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.468 "name": "raid_bdev1", 00:23:47.468 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:47.468 "strip_size_kb": 64, 00:23:47.468 "state": "online", 00:23:47.468 "raid_level": "raid5f", 00:23:47.468 "superblock": false, 00:23:47.468 "num_base_bdevs": 4, 00:23:47.468 "num_base_bdevs_discovered": 4, 00:23:47.468 "num_base_bdevs_operational": 4, 00:23:47.468 "process": { 00:23:47.468 "type": "rebuild", 00:23:47.468 "target": "spare", 00:23:47.468 "progress": { 00:23:47.468 "blocks": 26880, 00:23:47.468 "percent": 13 00:23:47.468 } 00:23:47.468 }, 00:23:47.468 "base_bdevs_list": [ 00:23:47.468 { 00:23:47.468 "name": "spare", 00:23:47.468 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:47.468 "is_configured": true, 00:23:47.468 "data_offset": 0, 00:23:47.468 "data_size": 65536 00:23:47.468 }, 00:23:47.468 { 00:23:47.468 "name": "BaseBdev2", 00:23:47.468 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:47.468 "is_configured": true, 00:23:47.468 "data_offset": 0, 00:23:47.468 "data_size": 65536 00:23:47.468 }, 00:23:47.468 { 00:23:47.468 "name": "BaseBdev3", 00:23:47.468 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:47.468 "is_configured": true, 00:23:47.468 "data_offset": 0, 00:23:47.468 "data_size": 65536 00:23:47.468 }, 00:23:47.468 { 00:23:47.468 "name": "BaseBdev4", 00:23:47.468 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:47.468 "is_configured": true, 00:23:47.468 "data_offset": 0, 00:23:47.468 "data_size": 65536 00:23:47.468 } 00:23:47.468 ] 00:23:47.468 }' 00:23:47.468 05:04:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.468 05:04:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.468 05:04:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.468 05:04:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.468 05:04:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.406 05:04:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.665 05:04:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:48.665 "name": "raid_bdev1", 00:23:48.665 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:48.665 "strip_size_kb": 64, 00:23:48.665 "state": "online", 00:23:48.665 "raid_level": "raid5f", 00:23:48.665 "superblock": false, 00:23:48.665 "num_base_bdevs": 4, 00:23:48.665 "num_base_bdevs_discovered": 4, 00:23:48.665 "num_base_bdevs_operational": 4, 00:23:48.665 "process": { 00:23:48.665 "type": "rebuild", 00:23:48.665 "target": "spare", 00:23:48.665 "progress": { 00:23:48.665 "blocks": 49920, 00:23:48.665 "percent": 25 00:23:48.665 } 00:23:48.665 }, 00:23:48.665 "base_bdevs_list": [ 00:23:48.665 { 00:23:48.665 "name": "spare", 00:23:48.665 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:48.665 "is_configured": true, 00:23:48.665 "data_offset": 0, 00:23:48.665 "data_size": 65536 00:23:48.665 }, 00:23:48.665 { 00:23:48.665 "name": "BaseBdev2", 00:23:48.665 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:48.665 "is_configured": true, 00:23:48.665 "data_offset": 0, 00:23:48.665 "data_size": 65536 00:23:48.665 }, 00:23:48.665 { 00:23:48.665 "name": "BaseBdev3", 00:23:48.665 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:48.665 "is_configured": true, 00:23:48.665 "data_offset": 0, 00:23:48.665 "data_size": 65536 00:23:48.665 }, 00:23:48.665 { 00:23:48.665 "name": "BaseBdev4", 00:23:48.665 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:48.665 "is_configured": true, 00:23:48.665 "data_offset": 0, 00:23:48.665 "data_size": 65536 00:23:48.665 } 00:23:48.665 ] 00:23:48.665 }' 00:23:48.665 05:04:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:48.665 05:04:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.665 05:04:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:48.665 05:04:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.665 05:04:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.603 05:04:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.862 05:04:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.862 "name": "raid_bdev1", 00:23:49.862 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:49.862 "strip_size_kb": 64, 00:23:49.862 "state": "online", 
00:23:49.862 "raid_level": "raid5f", 00:23:49.862 "superblock": false, 00:23:49.862 "num_base_bdevs": 4, 00:23:49.862 "num_base_bdevs_discovered": 4, 00:23:49.862 "num_base_bdevs_operational": 4, 00:23:49.862 "process": { 00:23:49.862 "type": "rebuild", 00:23:49.862 "target": "spare", 00:23:49.862 "progress": { 00:23:49.862 "blocks": 74880, 00:23:49.862 "percent": 38 00:23:49.862 } 00:23:49.862 }, 00:23:49.862 "base_bdevs_list": [ 00:23:49.862 { 00:23:49.862 "name": "spare", 00:23:49.862 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:49.862 "is_configured": true, 00:23:49.862 "data_offset": 0, 00:23:49.862 "data_size": 65536 00:23:49.862 }, 00:23:49.862 { 00:23:49.862 "name": "BaseBdev2", 00:23:49.862 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:49.862 "is_configured": true, 00:23:49.862 "data_offset": 0, 00:23:49.862 "data_size": 65536 00:23:49.862 }, 00:23:49.862 { 00:23:49.862 "name": "BaseBdev3", 00:23:49.862 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:49.862 "is_configured": true, 00:23:49.862 "data_offset": 0, 00:23:49.862 "data_size": 65536 00:23:49.862 }, 00:23:49.862 { 00:23:49.862 "name": "BaseBdev4", 00:23:49.862 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:49.862 "is_configured": true, 00:23:49.862 "data_offset": 0, 00:23:49.862 "data_size": 65536 00:23:49.862 } 00:23:49.862 ] 00:23:49.862 }' 00:23:49.862 05:04:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.862 05:04:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.862 05:04:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.862 05:04:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.862 05:04:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:51.240 05:04:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:51.240 05:04:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.240 05:04:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.240 05:04:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:51.240 05:04:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.241 "name": "raid_bdev1", 00:23:51.241 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:51.241 "strip_size_kb": 64, 00:23:51.241 "state": "online", 00:23:51.241 "raid_level": "raid5f", 00:23:51.241 "superblock": false, 00:23:51.241 "num_base_bdevs": 4, 00:23:51.241 "num_base_bdevs_discovered": 4, 00:23:51.241 "num_base_bdevs_operational": 4, 00:23:51.241 "process": { 00:23:51.241 "type": "rebuild", 00:23:51.241 "target": "spare", 00:23:51.241 "progress": { 00:23:51.241 "blocks": 97920, 00:23:51.241 "percent": 49 00:23:51.241 } 00:23:51.241 }, 00:23:51.241 "base_bdevs_list": [ 00:23:51.241 { 00:23:51.241 "name": "spare", 00:23:51.241 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:51.241 "is_configured": true, 00:23:51.241 "data_offset": 0, 00:23:51.241 "data_size": 65536 00:23:51.241 }, 00:23:51.241 { 00:23:51.241 "name": "BaseBdev2", 00:23:51.241 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:51.241 "is_configured": true, 00:23:51.241 "data_offset": 0, 
00:23:51.241 "data_size": 65536 00:23:51.241 }, 00:23:51.241 { 00:23:51.241 "name": "BaseBdev3", 00:23:51.241 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:51.241 "is_configured": true, 00:23:51.241 "data_offset": 0, 00:23:51.241 "data_size": 65536 00:23:51.241 }, 00:23:51.241 { 00:23:51.241 "name": "BaseBdev4", 00:23:51.241 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:51.241 "is_configured": true, 00:23:51.241 "data_offset": 0, 00:23:51.241 "data_size": 65536 00:23:51.241 } 00:23:51.241 ] 00:23:51.241 }' 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.241 05:04:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.179 05:04:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.438 05:04:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.438 "name": "raid_bdev1", 00:23:52.438 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:52.438 "strip_size_kb": 64, 00:23:52.438 "state": "online", 00:23:52.438 "raid_level": "raid5f", 00:23:52.438 "superblock": false, 00:23:52.438 "num_base_bdevs": 4, 00:23:52.438 "num_base_bdevs_discovered": 4, 00:23:52.438 "num_base_bdevs_operational": 4, 00:23:52.438 "process": { 00:23:52.438 "type": "rebuild", 00:23:52.438 "target": "spare", 00:23:52.438 "progress": { 00:23:52.438 "blocks": 122880, 00:23:52.438 "percent": 62 00:23:52.438 } 00:23:52.438 }, 00:23:52.438 "base_bdevs_list": [ 00:23:52.438 { 00:23:52.438 "name": "spare", 00:23:52.438 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:52.438 "is_configured": true, 00:23:52.438 "data_offset": 0, 00:23:52.438 "data_size": 65536 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "name": "BaseBdev2", 00:23:52.438 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:52.438 "is_configured": true, 00:23:52.438 "data_offset": 0, 00:23:52.438 "data_size": 65536 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "name": "BaseBdev3", 00:23:52.438 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:52.438 "is_configured": true, 00:23:52.438 "data_offset": 0, 00:23:52.438 "data_size": 65536 00:23:52.438 }, 00:23:52.438 { 00:23:52.438 "name": "BaseBdev4", 00:23:52.438 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:52.438 "is_configured": true, 00:23:52.438 "data_offset": 0, 00:23:52.438 "data_size": 65536 00:23:52.438 } 00:23:52.438 ] 00:23:52.438 }' 00:23:52.438 05:04:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.438 05:04:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.439 05:04:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:52.439 05:04:15 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:23:52.439 05:04:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.376 05:04:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.636 05:04:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.636 "name": "raid_bdev1", 00:23:53.636 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:53.636 "strip_size_kb": 64, 00:23:53.636 "state": "online", 00:23:53.636 "raid_level": "raid5f", 00:23:53.636 "superblock": false, 00:23:53.636 "num_base_bdevs": 4, 00:23:53.636 "num_base_bdevs_discovered": 4, 00:23:53.636 "num_base_bdevs_operational": 4, 00:23:53.636 "process": { 00:23:53.636 "type": "rebuild", 00:23:53.636 "target": "spare", 00:23:53.636 "progress": { 00:23:53.636 "blocks": 147840, 00:23:53.636 "percent": 75 00:23:53.636 } 00:23:53.636 }, 00:23:53.636 "base_bdevs_list": [ 00:23:53.636 { 00:23:53.636 "name": "spare", 00:23:53.636 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:53.636 "is_configured": true, 00:23:53.636 "data_offset": 0, 00:23:53.636 "data_size": 65536 00:23:53.636 }, 00:23:53.636 { 00:23:53.636 "name": "BaseBdev2", 00:23:53.636 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:53.636 "is_configured": true, 00:23:53.636 "data_offset": 0, 00:23:53.636 "data_size": 65536 00:23:53.636 }, 00:23:53.636 { 00:23:53.636 "name": "BaseBdev3", 00:23:53.636 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:53.636 "is_configured": true, 00:23:53.636 "data_offset": 0, 00:23:53.636 "data_size": 65536 00:23:53.636 }, 00:23:53.636 { 00:23:53.636 "name": "BaseBdev4", 00:23:53.636 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:53.636 "is_configured": true, 00:23:53.636 "data_offset": 0, 00:23:53.636 "data_size": 65536 00:23:53.636 } 00:23:53.636 ] 00:23:53.636 }' 00:23:53.636 05:04:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.636 05:04:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.636 05:04:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.636 05:04:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.636 05:04:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.015 05:04:18 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.015 "name": "raid_bdev1", 00:23:55.015 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:55.015 "strip_size_kb": 64, 00:23:55.015 "state": "online", 00:23:55.015 "raid_level": "raid5f", 00:23:55.015 "superblock": false, 00:23:55.015 "num_base_bdevs": 4, 00:23:55.015 "num_base_bdevs_discovered": 4, 00:23:55.015 "num_base_bdevs_operational": 4, 00:23:55.015 "process": { 00:23:55.015 "type": "rebuild", 00:23:55.015 "target": "spare", 00:23:55.015 "progress": { 00:23:55.015 "blocks": 170880, 00:23:55.015 "percent": 86 00:23:55.015 } 00:23:55.015 }, 00:23:55.015 "base_bdevs_list": [ 00:23:55.015 { 00:23:55.015 "name": "spare", 00:23:55.015 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:55.015 "is_configured": true, 00:23:55.015 "data_offset": 0, 00:23:55.015 "data_size": 65536 00:23:55.015 }, 00:23:55.015 { 00:23:55.015 "name": "BaseBdev2", 00:23:55.015 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:55.015 "is_configured": true, 00:23:55.015 "data_offset": 0, 00:23:55.015 "data_size": 65536 00:23:55.015 }, 00:23:55.015 { 00:23:55.015 "name": "BaseBdev3", 00:23:55.015 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:55.015 "is_configured": true, 00:23:55.015 "data_offset": 0, 00:23:55.015 "data_size": 65536 00:23:55.015 }, 00:23:55.015 { 00:23:55.015 "name": "BaseBdev4", 00:23:55.015 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:55.015 "is_configured": true, 00:23:55.015 "data_offset": 0, 00:23:55.015 "data_size": 65536 00:23:55.015 } 00:23:55.015 ] 00:23:55.015 }' 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.015 05:04:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.963 05:04:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.245 05:04:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.245 "name": "raid_bdev1", 00:23:56.245 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:56.245 "strip_size_kb": 64, 00:23:56.245 "state": "online", 00:23:56.245 "raid_level": "raid5f", 00:23:56.245 "superblock": false, 00:23:56.245 "num_base_bdevs": 4, 00:23:56.245 "num_base_bdevs_discovered": 4, 00:23:56.245 "num_base_bdevs_operational": 4, 00:23:56.245 "process": { 00:23:56.245 "type": "rebuild", 00:23:56.245 "target": "spare", 00:23:56.245 "progress": { 00:23:56.245 "blocks": 195840, 00:23:56.245 "percent": 99 00:23:56.245 } 00:23:56.245 }, 00:23:56.245 "base_bdevs_list": [ 00:23:56.245 { 00:23:56.245 "name": "spare", 00:23:56.245 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:56.245 "is_configured": true, 00:23:56.245 "data_offset": 0, 00:23:56.245 
"data_size": 65536 00:23:56.245 }, 00:23:56.245 { 00:23:56.245 "name": "BaseBdev2", 00:23:56.245 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:56.245 "is_configured": true, 00:23:56.245 "data_offset": 0, 00:23:56.245 "data_size": 65536 00:23:56.245 }, 00:23:56.245 { 00:23:56.245 "name": "BaseBdev3", 00:23:56.245 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:56.245 "is_configured": true, 00:23:56.245 "data_offset": 0, 00:23:56.245 "data_size": 65536 00:23:56.245 }, 00:23:56.245 { 00:23:56.245 "name": "BaseBdev4", 00:23:56.245 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:56.245 "is_configured": true, 00:23:56.245 "data_offset": 0, 00:23:56.245 "data_size": 65536 00:23:56.245 } 00:23:56.245 ] 00:23:56.245 }' 00:23:56.245 05:04:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.245 05:04:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:56.245 05:04:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.245 05:04:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.245 05:04:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:56.245 [2024-11-18 05:04:19.721231] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:56.245 [2024-11-18 05:04:19.721430] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:56.245 [2024-11-18 05:04:19.721517] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.194 05:04:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.451 05:04:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.451 "name": "raid_bdev1", 00:23:57.451 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:57.451 "strip_size_kb": 64, 00:23:57.451 "state": "online", 00:23:57.451 "raid_level": "raid5f", 00:23:57.451 "superblock": false, 00:23:57.451 "num_base_bdevs": 4, 00:23:57.451 "num_base_bdevs_discovered": 4, 00:23:57.451 "num_base_bdevs_operational": 4, 00:23:57.451 "base_bdevs_list": [ 00:23:57.451 { 00:23:57.451 "name": "spare", 00:23:57.451 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:57.451 "is_configured": true, 00:23:57.451 "data_offset": 0, 00:23:57.451 "data_size": 65536 00:23:57.451 }, 00:23:57.451 { 00:23:57.451 "name": "BaseBdev2", 00:23:57.451 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:57.451 "is_configured": true, 00:23:57.451 "data_offset": 0, 00:23:57.451 "data_size": 65536 00:23:57.451 }, 00:23:57.451 { 00:23:57.451 "name": "BaseBdev3", 00:23:57.451 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:57.451 "is_configured": true, 00:23:57.451 "data_offset": 0, 00:23:57.451 "data_size": 65536 00:23:57.451 }, 00:23:57.451 { 00:23:57.451 "name": "BaseBdev4", 00:23:57.451 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:57.451 "is_configured": true, 00:23:57.451 "data_offset": 0, 
00:23:57.451 "data_size": 65536 00:23:57.451 } 00:23:57.452 ] 00:23:57.452 }' 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@660 -- # break 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.452 05:04:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.710 "name": "raid_bdev1", 00:23:57.710 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:57.710 "strip_size_kb": 64, 00:23:57.710 "state": "online", 00:23:57.710 "raid_level": "raid5f", 00:23:57.710 "superblock": false, 00:23:57.710 "num_base_bdevs": 4, 00:23:57.710 "num_base_bdevs_discovered": 4, 00:23:57.710 "num_base_bdevs_operational": 4, 00:23:57.710 "base_bdevs_list": [ 00:23:57.710 { 00:23:57.710 "name": "spare", 00:23:57.710 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:57.710 "is_configured": true, 00:23:57.710 "data_offset": 0, 00:23:57.710 "data_size": 65536 00:23:57.710 }, 00:23:57.710 { 00:23:57.710 "name": "BaseBdev2", 00:23:57.710 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:57.710 "is_configured": true, 00:23:57.710 "data_offset": 0, 00:23:57.710 "data_size": 65536 00:23:57.710 }, 00:23:57.710 { 00:23:57.710 "name": "BaseBdev3", 00:23:57.710 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:57.710 "is_configured": true, 00:23:57.710 "data_offset": 0, 00:23:57.710 "data_size": 65536 00:23:57.710 }, 00:23:57.710 { 00:23:57.710 "name": "BaseBdev4", 00:23:57.710 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:57.710 "is_configured": true, 00:23:57.710 "data_offset": 0, 00:23:57.710 "data_size": 65536 00:23:57.710 } 00:23:57.710 ] 00:23:57.710 }' 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.710 05:04:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.996 05:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.996 "name": "raid_bdev1", 00:23:57.996 "uuid": "7d77162d-f752-4154-9ef6-df8dc338135c", 00:23:57.996 "strip_size_kb": 64, 00:23:57.996 "state": "online", 00:23:57.996 "raid_level": "raid5f", 00:23:57.996 "superblock": false, 00:23:57.996 "num_base_bdevs": 4, 00:23:57.996 "num_base_bdevs_discovered": 4, 00:23:57.996 "num_base_bdevs_operational": 4, 00:23:57.996 "base_bdevs_list": [ 00:23:57.996 { 00:23:57.996 "name": "spare", 00:23:57.996 "uuid": "63dd6031-9f74-511b-8611-1cdb2e581705", 00:23:57.996 "is_configured": true, 00:23:57.996 "data_offset": 0, 00:23:57.996 "data_size": 65536 00:23:57.996 }, 00:23:57.996 { 00:23:57.996 "name": "BaseBdev2", 00:23:57.996 "uuid": "bb873c7b-c811-4820-be87-a4baf33df07d", 00:23:57.996 "is_configured": true, 00:23:57.996 "data_offset": 0, 00:23:57.996 "data_size": 65536 00:23:57.996 }, 00:23:57.996 { 00:23:57.996 "name": "BaseBdev3", 00:23:57.996 "uuid": "e4866d8f-538e-47e9-b876-b4f8cf3c1df7", 00:23:57.996 "is_configured": true, 00:23:57.996 "data_offset": 0, 00:23:57.996 "data_size": 65536 00:23:57.996 }, 00:23:57.996 { 00:23:57.996 "name": "BaseBdev4", 00:23:57.996 "uuid": "be1e0fdd-b5c4-4d94-82a2-3c25178c803a", 00:23:57.996 "is_configured": true, 00:23:57.996 "data_offset": 0, 00:23:57.996 "data_size": 65536 00:23:57.996 } 00:23:57.996 ] 00:23:57.996 }' 00:23:57.996 05:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.996 05:04:21 -- common/autotest_common.sh@10 -- # set +x 00:23:58.255 05:04:21 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:58.513 [2024-11-18 05:04:21.905274] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:58.513 [2024-11-18 05:04:21.905454] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:58.513 [2024-11-18 05:04:21.905552] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:58.513 [2024-11-18 05:04:21.905651] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:58.513 [2024-11-18 05:04:21.905666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:23:58.513 05:04:21 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:58.513 05:04:21 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.771 05:04:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:58.771 05:04:22 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:58.771 05:04:22 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:58.771 05:04:22 -- 
bdev/nbd_common.sh@12 -- # local i 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:58.771 05:04:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:59.030 /dev/nbd0 00:23:59.030 05:04:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.030 05:04:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:59.030 05:04:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:59.030 05:04:22 -- common/autotest_common.sh@867 -- # local i 00:23:59.030 05:04:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:59.030 05:04:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:59.030 05:04:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:59.030 05:04:22 -- common/autotest_common.sh@871 -- # break 00:23:59.030 05:04:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:59.030 05:04:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:59.030 05:04:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.030 1+0 records in 00:23:59.030 1+0 records out 00:23:59.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00274059 s, 1.5 MB/s 00:23:59.030 05:04:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.030 05:04:22 -- common/autotest_common.sh@884 -- # size=4096 00:23:59.030 05:04:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.030 05:04:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:59.030 05:04:22 -- common/autotest_common.sh@887 -- # return 0 00:23:59.030 05:04:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.030 05:04:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.030 05:04:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:59.289 /dev/nbd1 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:59.289 05:04:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:59.289 05:04:22 -- common/autotest_common.sh@867 -- # local i 00:23:59.289 05:04:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:59.289 05:04:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:59.289 05:04:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:59.289 05:04:22 -- common/autotest_common.sh@871 -- # break 00:23:59.289 05:04:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:59.289 05:04:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:59.289 05:04:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.289 1+0 records in 00:23:59.289 1+0 records out 00:23:59.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276252 s, 14.8 MB/s 00:23:59.289 05:04:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.289 05:04:22 -- common/autotest_common.sh@884 -- # size=4096 00:23:59.289 05:04:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.289 05:04:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:59.289 05:04:22 -- 
common/autotest_common.sh@887 -- # return 0 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.289 05:04:22 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:59.289 05:04:22 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@51 -- # local i 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.289 05:04:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@41 -- # break 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.548 05:04:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@41 -- # break 00:23:59.807 05:04:23 -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.807 05:04:23 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:59.807 05:04:23 -- bdev/bdev_raid.sh@709 -- # killprocess 85971 00:23:59.807 05:04:23 -- common/autotest_common.sh@936 -- # '[' -z 85971 ']' 00:23:59.807 05:04:23 -- common/autotest_common.sh@940 -- # kill -0 85971 00:23:59.807 05:04:23 -- common/autotest_common.sh@941 -- # uname 00:23:59.807 05:04:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:59.807 05:04:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85971 00:24:00.067 killing process with pid 85971 00:24:00.067 Received shutdown signal, test time was about 60.000000 seconds 00:24:00.067 00:24:00.067 Latency(us) 00:24:00.067 [2024-11-18T05:04:23.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.067 [2024-11-18T05:04:23.591Z] =================================================================================================================== 00:24:00.067 [2024-11-18T05:04:23.591Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:00.067 05:04:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:00.067 05:04:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:00.067 05:04:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85971' 
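The byte-for-byte cmp above is the real pass/fail signal for the test: after bdev_raid_delete tears the array down, BaseBdev1 (the member removed before the rebuild) and spare (reconstructed from parity on the remaining three members) are both exported over NBD and compared; identical contents prove the raid5f rebuild regenerated the missing data correctly. Condensed, with the same names and socket as the sketches above:

$RPC bdev_raid_delete raid_bdev1
$RPC nbd_start_disk BaseBdev1 /dev/nbd0
$RPC nbd_start_disk spare /dev/nbd1
# cmp exits non-zero at the first differing byte, which fails the test script.
cmp -i 0 /dev/nbd0 /dev/nbd1
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1

With the comparison clean, killprocess stops the bdevperf instance (pid 85971) and the suite proceeds to the superblock variant, raid5f_rebuild_test_sb, traced next.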
00:24:00.067 05:04:23 -- common/autotest_common.sh@955 -- # kill 85971 00:24:00.067 05:04:23 -- common/autotest_common.sh@960 -- # wait 85971 00:24:00.067 [2024-11-18 05:04:23.347503] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:00.325 [2024-11-18 05:04:23.659751] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:01.264 ************************************ 00:24:01.264 END TEST raid5f_rebuild_test 00:24:01.264 ************************************ 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:01.264 00:24:01.264 real 0m23.138s 00:24:01.264 user 0m31.134s 00:24:01.264 sys 0m2.685s 00:24:01.264 05:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:01.264 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:24:01.264 05:04:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:01.264 05:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:01.264 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:24:01.264 ************************************ 00:24:01.264 START TEST raid5f_rebuild_test_sb 00:24:01.264 ************************************ 00:24:01.264 05:04:24 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@539 -- # '[' true = 
true ']' 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@544 -- # raid_pid=86535 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:01.264 05:04:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 86535 /var/tmp/spdk-raid.sock 00:24:01.264 05:04:24 -- common/autotest_common.sh@829 -- # '[' -z 86535 ']' 00:24:01.264 05:04:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:01.264 05:04:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.264 05:04:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:01.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:01.264 05:04:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.264 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:24:01.264 [2024-11-18 05:04:24.692029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:01.264 [2024-11-18 05:04:24.692495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86535 ] 00:24:01.264 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:01.264 Zero copy mechanism will not be used. 00:24:01.524 [2024-11-18 05:04:24.861321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.524 [2024-11-18 05:04:25.014477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.783 [2024-11-18 05:04:25.161808] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.041 05:04:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.041 05:04:25 -- common/autotest_common.sh@862 -- # return 0 00:24:02.041 05:04:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:02.041 05:04:25 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:02.041 05:04:25 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:02.300 BaseBdev1_malloc 00:24:02.300 05:04:25 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:02.560 [2024-11-18 05:04:25.924371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:02.560 [2024-11-18 05:04:25.924456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.560 [2024-11-18 05:04:25.924492] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:24:02.560 [2024-11-18 05:04:25.924507] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.560 [2024-11-18 05:04:25.926638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.560 [2024-11-18 05:04:25.926682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:02.560 BaseBdev1 00:24:02.560 05:04:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:02.560 05:04:25 -- bdev/bdev_raid.sh@549 -- # '[' true = true 
']' 00:24:02.560 05:04:25 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:02.818 BaseBdev2_malloc 00:24:02.818 05:04:26 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:02.818 [2024-11-18 05:04:26.317223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:02.818 [2024-11-18 05:04:26.317511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.818 [2024-11-18 05:04:26.317560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:24:02.818 [2024-11-18 05:04:26.317588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.818 [2024-11-18 05:04:26.320093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.818 [2024-11-18 05:04:26.320152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:02.818 BaseBdev2 00:24:02.819 05:04:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:02.819 05:04:26 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:02.819 05:04:26 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:03.077 BaseBdev3_malloc 00:24:03.077 05:04:26 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:03.336 [2024-11-18 05:04:26.705599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:03.336 [2024-11-18 05:04:26.705833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.336 [2024-11-18 05:04:26.705906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:24:03.336 [2024-11-18 05:04:26.706022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.336 [2024-11-18 05:04:26.708125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.336 [2024-11-18 05:04:26.708343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:03.336 BaseBdev3 00:24:03.336 05:04:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:03.336 05:04:26 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:03.336 05:04:26 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:03.596 BaseBdev4_malloc 00:24:03.596 05:04:26 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:03.596 [2024-11-18 05:04:27.105820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:03.596 [2024-11-18 05:04:27.106038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.596 [2024-11-18 05:04:27.106080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:03.596 [2024-11-18 05:04:27.106097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.596 [2024-11-18 05:04:27.108527] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:03.596 [2024-11-18 05:04:27.108570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:03.596 BaseBdev4 00:24:03.855 05:04:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:03.855 spare_malloc 00:24:03.855 05:04:27 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:04.113 spare_delay 00:24:04.113 05:04:27 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:04.372 [2024-11-18 05:04:27.669877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:04.372 [2024-11-18 05:04:27.670140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.372 [2024-11-18 05:04:27.670197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:24:04.372 [2024-11-18 05:04:27.670214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.372 [2024-11-18 05:04:27.672454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.372 [2024-11-18 05:04:27.672498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:04.372 spare 00:24:04.372 05:04:27 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:04.632 [2024-11-18 05:04:27.901989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:04.632 [2024-11-18 05:04:27.903871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:04.632 [2024-11-18 05:04:27.904073] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:04.632 [2024-11-18 05:04:27.904180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:04.632 [2024-11-18 05:04:27.904524] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:24:04.632 [2024-11-18 05:04:27.904594] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:04.632 [2024-11-18 05:04:27.904804] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:24:04.632 [2024-11-18 05:04:27.910411] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:24:04.632 [2024-11-18 05:04:27.910560] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:24:04.632 [2024-11-18 05:04:27.910892] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:04.632 05:04:27 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.632 05:04:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.891 05:04:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.891 "name": "raid_bdev1", 00:24:04.891 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:04.891 "strip_size_kb": 64, 00:24:04.891 "state": "online", 00:24:04.891 "raid_level": "raid5f", 00:24:04.891 "superblock": true, 00:24:04.891 "num_base_bdevs": 4, 00:24:04.891 "num_base_bdevs_discovered": 4, 00:24:04.891 "num_base_bdevs_operational": 4, 00:24:04.891 "base_bdevs_list": [ 00:24:04.891 { 00:24:04.891 "name": "BaseBdev1", 00:24:04.891 "uuid": "d3d0da9a-6ade-5f5f-bd1f-e07b05004bd7", 00:24:04.891 "is_configured": true, 00:24:04.891 "data_offset": 2048, 00:24:04.891 "data_size": 63488 00:24:04.891 }, 00:24:04.891 { 00:24:04.891 "name": "BaseBdev2", 00:24:04.891 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:04.891 "is_configured": true, 00:24:04.891 "data_offset": 2048, 00:24:04.891 "data_size": 63488 00:24:04.891 }, 00:24:04.891 { 00:24:04.891 "name": "BaseBdev3", 00:24:04.891 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:04.891 "is_configured": true, 00:24:04.891 "data_offset": 2048, 00:24:04.891 "data_size": 63488 00:24:04.891 }, 00:24:04.891 { 00:24:04.891 "name": "BaseBdev4", 00:24:04.891 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:04.891 "is_configured": true, 00:24:04.891 "data_offset": 2048, 00:24:04.891 "data_size": 63488 00:24:04.891 } 00:24:04.891 ] 00:24:04.891 }' 00:24:04.891 05:04:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.891 05:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:05.150 05:04:28 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:05.150 05:04:28 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:05.150 [2024-11-18 05:04:28.600799] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:05.150 05:04:28 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:24:05.151 05:04:28 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.151 05:04:28 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:05.410 05:04:28 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:05.410 05:04:28 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:05.410 05:04:28 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:05.410 05:04:28 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@12 -- # local i 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:05.410 
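Each verify_raid_bdev_state call traced above reduces to one RPC round-trip plus a jq filter over the returned array. A hedged sketch of that query pattern, reusing the socket path and the jq filter seen in the trace (the individual assertions are illustrative, not the literal bdev_raid.sh source):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "raid_bdev1")')
# Assert the fields the test checks: state, level, strip size, member counts.
[[ $(jq -r .state <<<"$info") == online ]]
[[ $(jq -r .raid_level <<<"$info") == raid5f ]]
[[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 4 ]]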
05:04:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:05.410 05:04:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:05.669 [2024-11-18 05:04:29.024879] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:05.669 /dev/nbd0 00:24:05.669 05:04:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:05.669 05:04:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:05.669 05:04:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:05.669 05:04:29 -- common/autotest_common.sh@867 -- # local i 00:24:05.669 05:04:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:05.669 05:04:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:05.669 05:04:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:05.669 05:04:29 -- common/autotest_common.sh@871 -- # break 00:24:05.669 05:04:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:05.669 05:04:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:05.669 05:04:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:05.669 1+0 records in 00:24:05.669 1+0 records out 00:24:05.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026114 s, 15.7 MB/s 00:24:05.669 05:04:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:05.669 05:04:29 -- common/autotest_common.sh@884 -- # size=4096 00:24:05.669 05:04:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:05.669 05:04:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:05.669 05:04:29 -- common/autotest_common.sh@887 -- # return 0 00:24:05.669 05:04:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:05.669 05:04:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:05.669 05:04:29 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:05.669 05:04:29 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:05.669 05:04:29 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:05.669 05:04:29 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:06.235 496+0 records in 00:24:06.235 496+0 records out 00:24:06.235 97517568 bytes (98 MB, 93 MiB) copied, 0.46912 s, 208 MB/s 00:24:06.235 05:04:29 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:06.235 05:04:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:06.235 05:04:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:06.235 05:04:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:06.235 05:04:29 -- bdev/nbd_common.sh@51 -- # local i 00:24:06.235 05:04:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:06.235 05:04:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:06.494 [2024-11-18 05:04:29.818331] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@41 -- # break 00:24:06.494 05:04:29 -- bdev/nbd_common.sh@45 -- # return 0 00:24:06.494 05:04:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:06.753 [2024-11-18 05:04:30.081874] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.753 05:04:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.012 05:04:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.012 "name": "raid_bdev1", 00:24:07.012 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:07.012 "strip_size_kb": 64, 00:24:07.012 "state": "online", 00:24:07.012 "raid_level": "raid5f", 00:24:07.012 "superblock": true, 00:24:07.012 "num_base_bdevs": 4, 00:24:07.012 "num_base_bdevs_discovered": 3, 00:24:07.012 "num_base_bdevs_operational": 3, 00:24:07.012 "base_bdevs_list": [ 00:24:07.012 { 00:24:07.012 "name": null, 00:24:07.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.012 "is_configured": false, 00:24:07.012 "data_offset": 2048, 00:24:07.012 "data_size": 63488 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "name": "BaseBdev2", 00:24:07.012 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:07.012 "is_configured": true, 00:24:07.012 "data_offset": 2048, 00:24:07.012 "data_size": 63488 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "name": "BaseBdev3", 00:24:07.012 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:07.012 "is_configured": true, 00:24:07.012 "data_offset": 2048, 00:24:07.012 "data_size": 63488 00:24:07.012 }, 00:24:07.012 { 00:24:07.012 "name": "BaseBdev4", 00:24:07.012 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:07.012 "is_configured": true, 00:24:07.012 "data_offset": 2048, 00:24:07.012 "data_size": 63488 00:24:07.012 } 00:24:07.012 ] 00:24:07.012 }' 00:24:07.012 05:04:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.012 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.271 05:04:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.530 [2024-11-18 05:04:30.826067] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:07.530 [2024-11-18 05:04:30.826326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.530 [2024-11-18 05:04:30.836763] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a300 00:24:07.530 [2024-11-18 
05:04:30.847449] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.530 05:04:30 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.468 05:04:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.727 05:04:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.727 "name": "raid_bdev1", 00:24:08.727 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:08.727 "strip_size_kb": 64, 00:24:08.727 "state": "online", 00:24:08.727 "raid_level": "raid5f", 00:24:08.727 "superblock": true, 00:24:08.727 "num_base_bdevs": 4, 00:24:08.727 "num_base_bdevs_discovered": 4, 00:24:08.727 "num_base_bdevs_operational": 4, 00:24:08.727 "process": { 00:24:08.727 "type": "rebuild", 00:24:08.727 "target": "spare", 00:24:08.727 "progress": { 00:24:08.727 "blocks": 23040, 00:24:08.727 "percent": 12 00:24:08.727 } 00:24:08.727 }, 00:24:08.727 "base_bdevs_list": [ 00:24:08.727 { 00:24:08.727 "name": "spare", 00:24:08.727 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:08.727 "is_configured": true, 00:24:08.727 "data_offset": 2048, 00:24:08.727 "data_size": 63488 00:24:08.727 }, 00:24:08.727 { 00:24:08.727 "name": "BaseBdev2", 00:24:08.727 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:08.727 "is_configured": true, 00:24:08.727 "data_offset": 2048, 00:24:08.727 "data_size": 63488 00:24:08.727 }, 00:24:08.727 { 00:24:08.727 "name": "BaseBdev3", 00:24:08.727 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:08.727 "is_configured": true, 00:24:08.727 "data_offset": 2048, 00:24:08.727 "data_size": 63488 00:24:08.727 }, 00:24:08.727 { 00:24:08.727 "name": "BaseBdev4", 00:24:08.727 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:08.727 "is_configured": true, 00:24:08.727 "data_offset": 2048, 00:24:08.727 "data_size": 63488 00:24:08.727 } 00:24:08.728 ] 00:24:08.728 }' 00:24:08.728 05:04:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.728 05:04:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.728 05:04:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.728 05:04:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.728 05:04:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:08.986 [2024-11-18 05:04:32.352760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.986 [2024-11-18 05:04:32.357449] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:08.986 [2024-11-18 05:04:32.357514] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.986 05:04:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.987 05:04:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.245 05:04:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.245 "name": "raid_bdev1", 00:24:09.245 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:09.245 "strip_size_kb": 64, 00:24:09.245 "state": "online", 00:24:09.245 "raid_level": "raid5f", 00:24:09.245 "superblock": true, 00:24:09.245 "num_base_bdevs": 4, 00:24:09.245 "num_base_bdevs_discovered": 3, 00:24:09.246 "num_base_bdevs_operational": 3, 00:24:09.246 "base_bdevs_list": [ 00:24:09.246 { 00:24:09.246 "name": null, 00:24:09.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.246 "is_configured": false, 00:24:09.246 "data_offset": 2048, 00:24:09.246 "data_size": 63488 00:24:09.246 }, 00:24:09.246 { 00:24:09.246 "name": "BaseBdev2", 00:24:09.246 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:09.246 "is_configured": true, 00:24:09.246 "data_offset": 2048, 00:24:09.246 "data_size": 63488 00:24:09.246 }, 00:24:09.246 { 00:24:09.246 "name": "BaseBdev3", 00:24:09.246 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:09.246 "is_configured": true, 00:24:09.246 "data_offset": 2048, 00:24:09.246 "data_size": 63488 00:24:09.246 }, 00:24:09.246 { 00:24:09.246 "name": "BaseBdev4", 00:24:09.246 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:09.246 "is_configured": true, 00:24:09.246 "data_offset": 2048, 00:24:09.246 "data_size": 63488 00:24:09.246 } 00:24:09.246 ] 00:24:09.246 }' 00:24:09.246 05:04:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:09.246 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.505 05:04:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.764 05:04:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:09.764 "name": "raid_bdev1", 00:24:09.764 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:09.764 "strip_size_kb": 64, 00:24:09.764 "state": "online", 00:24:09.764 "raid_level": "raid5f", 00:24:09.764 "superblock": true, 00:24:09.764 "num_base_bdevs": 4, 00:24:09.764 "num_base_bdevs_discovered": 3, 00:24:09.764 "num_base_bdevs_operational": 3, 00:24:09.764 "base_bdevs_list": [ 00:24:09.764 { 00:24:09.764 "name": null, 00:24:09.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.764 
"is_configured": false, 00:24:09.764 "data_offset": 2048, 00:24:09.764 "data_size": 63488 00:24:09.764 }, 00:24:09.764 { 00:24:09.764 "name": "BaseBdev2", 00:24:09.764 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:09.764 "is_configured": true, 00:24:09.764 "data_offset": 2048, 00:24:09.764 "data_size": 63488 00:24:09.764 }, 00:24:09.764 { 00:24:09.764 "name": "BaseBdev3", 00:24:09.764 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:09.764 "is_configured": true, 00:24:09.764 "data_offset": 2048, 00:24:09.764 "data_size": 63488 00:24:09.764 }, 00:24:09.764 { 00:24:09.764 "name": "BaseBdev4", 00:24:09.764 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:09.764 "is_configured": true, 00:24:09.764 "data_offset": 2048, 00:24:09.764 "data_size": 63488 00:24:09.764 } 00:24:09.764 ] 00:24:09.764 }' 00:24:09.764 05:04:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:09.764 05:04:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.764 05:04:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.764 05:04:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:09.764 05:04:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:10.023 [2024-11-18 05:04:33.394649] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:10.023 [2024-11-18 05:04:33.394689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.023 [2024-11-18 05:04:33.404057] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a3d0 00:24:10.023 [2024-11-18 05:04:33.410795] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:10.023 05:04:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.960 05:04:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.219 05:04:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.219 "name": "raid_bdev1", 00:24:11.219 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:11.219 "strip_size_kb": 64, 00:24:11.219 "state": "online", 00:24:11.219 "raid_level": "raid5f", 00:24:11.219 "superblock": true, 00:24:11.219 "num_base_bdevs": 4, 00:24:11.219 "num_base_bdevs_discovered": 4, 00:24:11.219 "num_base_bdevs_operational": 4, 00:24:11.219 "process": { 00:24:11.219 "type": "rebuild", 00:24:11.219 "target": "spare", 00:24:11.219 "progress": { 00:24:11.219 "blocks": 23040, 00:24:11.219 "percent": 12 00:24:11.219 } 00:24:11.219 }, 00:24:11.219 "base_bdevs_list": [ 00:24:11.219 { 00:24:11.219 "name": "spare", 00:24:11.219 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:11.219 "is_configured": true, 00:24:11.219 "data_offset": 2048, 00:24:11.219 "data_size": 63488 00:24:11.219 }, 00:24:11.219 { 00:24:11.219 "name": "BaseBdev2", 00:24:11.219 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:11.219 "is_configured": 
true, 00:24:11.219 "data_offset": 2048, 00:24:11.219 "data_size": 63488 00:24:11.219 }, 00:24:11.219 { 00:24:11.219 "name": "BaseBdev3", 00:24:11.219 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:11.219 "is_configured": true, 00:24:11.219 "data_offset": 2048, 00:24:11.219 "data_size": 63488 00:24:11.219 }, 00:24:11.219 { 00:24:11.219 "name": "BaseBdev4", 00:24:11.219 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:11.219 "is_configured": true, 00:24:11.219 "data_offset": 2048, 00:24:11.219 "data_size": 63488 00:24:11.219 } 00:24:11.219 ] 00:24:11.219 }' 00:24:11.219 05:04:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.219 05:04:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:11.220 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@657 -- # local timeout=648 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.220 05:04:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.479 05:04:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.479 "name": "raid_bdev1", 00:24:11.479 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:11.479 "strip_size_kb": 64, 00:24:11.479 "state": "online", 00:24:11.479 "raid_level": "raid5f", 00:24:11.479 "superblock": true, 00:24:11.479 "num_base_bdevs": 4, 00:24:11.479 "num_base_bdevs_discovered": 4, 00:24:11.479 "num_base_bdevs_operational": 4, 00:24:11.479 "process": { 00:24:11.479 "type": "rebuild", 00:24:11.479 "target": "spare", 00:24:11.479 "progress": { 00:24:11.479 "blocks": 26880, 00:24:11.479 "percent": 14 00:24:11.479 } 00:24:11.479 }, 00:24:11.479 "base_bdevs_list": [ 00:24:11.479 { 00:24:11.479 "name": "spare", 00:24:11.479 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:11.479 "is_configured": true, 00:24:11.479 "data_offset": 2048, 00:24:11.479 "data_size": 63488 00:24:11.479 }, 00:24:11.479 { 00:24:11.479 "name": "BaseBdev2", 00:24:11.479 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:11.479 "is_configured": true, 00:24:11.479 "data_offset": 2048, 00:24:11.479 "data_size": 63488 00:24:11.479 }, 00:24:11.479 { 00:24:11.479 "name": "BaseBdev3", 00:24:11.479 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:11.479 "is_configured": true, 00:24:11.479 "data_offset": 2048, 00:24:11.479 "data_size": 63488 00:24:11.479 }, 00:24:11.479 { 00:24:11.479 "name": "BaseBdev4", 00:24:11.479 "uuid": 
"cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:11.479 "is_configured": true, 00:24:11.479 "data_offset": 2048, 00:24:11.479 "data_size": 63488 00:24:11.479 } 00:24:11.479 ] 00:24:11.479 }' 00:24:11.479 05:04:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.479 05:04:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.479 05:04:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.479 05:04:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.479 05:04:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.416 05:04:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.676 05:04:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:12.676 "name": "raid_bdev1", 00:24:12.676 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:12.676 "strip_size_kb": 64, 00:24:12.676 "state": "online", 00:24:12.676 "raid_level": "raid5f", 00:24:12.676 "superblock": true, 00:24:12.676 "num_base_bdevs": 4, 00:24:12.676 "num_base_bdevs_discovered": 4, 00:24:12.676 "num_base_bdevs_operational": 4, 00:24:12.676 "process": { 00:24:12.676 "type": "rebuild", 00:24:12.676 "target": "spare", 00:24:12.676 "progress": { 00:24:12.676 "blocks": 49920, 00:24:12.676 "percent": 26 00:24:12.676 } 00:24:12.676 }, 00:24:12.676 "base_bdevs_list": [ 00:24:12.676 { 00:24:12.676 "name": "spare", 00:24:12.676 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:12.676 "is_configured": true, 00:24:12.676 "data_offset": 2048, 00:24:12.676 "data_size": 63488 00:24:12.676 }, 00:24:12.676 { 00:24:12.676 "name": "BaseBdev2", 00:24:12.676 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:12.676 "is_configured": true, 00:24:12.676 "data_offset": 2048, 00:24:12.676 "data_size": 63488 00:24:12.676 }, 00:24:12.676 { 00:24:12.676 "name": "BaseBdev3", 00:24:12.676 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:12.676 "is_configured": true, 00:24:12.676 "data_offset": 2048, 00:24:12.676 "data_size": 63488 00:24:12.676 }, 00:24:12.676 { 00:24:12.676 "name": "BaseBdev4", 00:24:12.676 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:12.676 "is_configured": true, 00:24:12.676 "data_offset": 2048, 00:24:12.676 "data_size": 63488 00:24:12.676 } 00:24:12.676 ] 00:24:12.676 }' 00:24:12.676 05:04:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:12.676 05:04:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.676 05:04:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:12.676 05:04:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.676 05:04:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:14.060 05:04:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:14.060 05:04:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.060 05:04:37 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:14.061 "name": "raid_bdev1", 00:24:14.061 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:14.061 "strip_size_kb": 64, 00:24:14.061 "state": "online", 00:24:14.061 "raid_level": "raid5f", 00:24:14.061 "superblock": true, 00:24:14.061 "num_base_bdevs": 4, 00:24:14.061 "num_base_bdevs_discovered": 4, 00:24:14.061 "num_base_bdevs_operational": 4, 00:24:14.061 "process": { 00:24:14.061 "type": "rebuild", 00:24:14.061 "target": "spare", 00:24:14.061 "progress": { 00:24:14.061 "blocks": 74880, 00:24:14.061 "percent": 39 00:24:14.061 } 00:24:14.061 }, 00:24:14.061 "base_bdevs_list": [ 00:24:14.061 { 00:24:14.061 "name": "spare", 00:24:14.061 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:14.061 "is_configured": true, 00:24:14.061 "data_offset": 2048, 00:24:14.061 "data_size": 63488 00:24:14.061 }, 00:24:14.061 { 00:24:14.061 "name": "BaseBdev2", 00:24:14.061 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:14.061 "is_configured": true, 00:24:14.061 "data_offset": 2048, 00:24:14.061 "data_size": 63488 00:24:14.061 }, 00:24:14.061 { 00:24:14.061 "name": "BaseBdev3", 00:24:14.061 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:14.061 "is_configured": true, 00:24:14.061 "data_offset": 2048, 00:24:14.061 "data_size": 63488 00:24:14.061 }, 00:24:14.061 { 00:24:14.061 "name": "BaseBdev4", 00:24:14.061 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:14.061 "is_configured": true, 00:24:14.061 "data_offset": 2048, 00:24:14.061 "data_size": 63488 00:24:14.061 } 00:24:14.061 ] 00:24:14.061 }' 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.061 05:04:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.997 05:04:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.254 05:04:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:15.254 "name": "raid_bdev1", 00:24:15.254 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:15.254 "strip_size_kb": 64, 00:24:15.254 "state": "online", 00:24:15.254 "raid_level": "raid5f", 00:24:15.254 "superblock": true, 00:24:15.254 "num_base_bdevs": 4, 
00:24:15.254 "num_base_bdevs_discovered": 4, 00:24:15.254 "num_base_bdevs_operational": 4, 00:24:15.254 "process": { 00:24:15.254 "type": "rebuild", 00:24:15.254 "target": "spare", 00:24:15.254 "progress": { 00:24:15.254 "blocks": 97920, 00:24:15.254 "percent": 51 00:24:15.254 } 00:24:15.254 }, 00:24:15.254 "base_bdevs_list": [ 00:24:15.254 { 00:24:15.254 "name": "spare", 00:24:15.254 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:15.254 "is_configured": true, 00:24:15.254 "data_offset": 2048, 00:24:15.254 "data_size": 63488 00:24:15.254 }, 00:24:15.254 { 00:24:15.254 "name": "BaseBdev2", 00:24:15.254 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:15.254 "is_configured": true, 00:24:15.254 "data_offset": 2048, 00:24:15.254 "data_size": 63488 00:24:15.254 }, 00:24:15.254 { 00:24:15.254 "name": "BaseBdev3", 00:24:15.254 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:15.254 "is_configured": true, 00:24:15.254 "data_offset": 2048, 00:24:15.254 "data_size": 63488 00:24:15.254 }, 00:24:15.254 { 00:24:15.254 "name": "BaseBdev4", 00:24:15.254 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:15.254 "is_configured": true, 00:24:15.254 "data_offset": 2048, 00:24:15.254 "data_size": 63488 00:24:15.254 } 00:24:15.254 ] 00:24:15.254 }' 00:24:15.254 05:04:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:15.254 05:04:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.254 05:04:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:15.254 05:04:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.254 05:04:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.192 05:04:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.452 05:04:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.452 "name": "raid_bdev1", 00:24:16.452 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:16.452 "strip_size_kb": 64, 00:24:16.452 "state": "online", 00:24:16.452 "raid_level": "raid5f", 00:24:16.452 "superblock": true, 00:24:16.452 "num_base_bdevs": 4, 00:24:16.452 "num_base_bdevs_discovered": 4, 00:24:16.452 "num_base_bdevs_operational": 4, 00:24:16.452 "process": { 00:24:16.452 "type": "rebuild", 00:24:16.452 "target": "spare", 00:24:16.452 "progress": { 00:24:16.452 "blocks": 122880, 00:24:16.452 "percent": 64 00:24:16.452 } 00:24:16.452 }, 00:24:16.452 "base_bdevs_list": [ 00:24:16.452 { 00:24:16.452 "name": "spare", 00:24:16.452 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:16.452 "is_configured": true, 00:24:16.452 "data_offset": 2048, 00:24:16.452 "data_size": 63488 00:24:16.452 }, 00:24:16.452 { 00:24:16.452 "name": "BaseBdev2", 00:24:16.452 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:16.452 "is_configured": true, 00:24:16.452 "data_offset": 2048, 00:24:16.452 "data_size": 63488 00:24:16.452 }, 00:24:16.452 { 00:24:16.452 "name": 
"BaseBdev3", 00:24:16.452 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:16.452 "is_configured": true, 00:24:16.452 "data_offset": 2048, 00:24:16.452 "data_size": 63488 00:24:16.452 }, 00:24:16.452 { 00:24:16.452 "name": "BaseBdev4", 00:24:16.452 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:16.452 "is_configured": true, 00:24:16.452 "data_offset": 2048, 00:24:16.452 "data_size": 63488 00:24:16.452 } 00:24:16.452 ] 00:24:16.452 }' 00:24:16.452 05:04:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:16.452 05:04:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.452 05:04:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.452 05:04:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.452 05:04:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.830 05:04:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.830 05:04:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.830 "name": "raid_bdev1", 00:24:17.830 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:17.830 "strip_size_kb": 64, 00:24:17.830 "state": "online", 00:24:17.830 "raid_level": "raid5f", 00:24:17.830 "superblock": true, 00:24:17.830 "num_base_bdevs": 4, 00:24:17.830 "num_base_bdevs_discovered": 4, 00:24:17.830 "num_base_bdevs_operational": 4, 00:24:17.830 "process": { 00:24:17.830 "type": "rebuild", 00:24:17.830 "target": "spare", 00:24:17.830 "progress": { 00:24:17.830 "blocks": 147840, 00:24:17.830 "percent": 77 00:24:17.830 } 00:24:17.830 }, 00:24:17.830 "base_bdevs_list": [ 00:24:17.830 { 00:24:17.830 "name": "spare", 00:24:17.830 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:17.830 "is_configured": true, 00:24:17.830 "data_offset": 2048, 00:24:17.830 "data_size": 63488 00:24:17.830 }, 00:24:17.830 { 00:24:17.830 "name": "BaseBdev2", 00:24:17.830 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:17.830 "is_configured": true, 00:24:17.830 "data_offset": 2048, 00:24:17.830 "data_size": 63488 00:24:17.830 }, 00:24:17.830 { 00:24:17.830 "name": "BaseBdev3", 00:24:17.830 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:17.830 "is_configured": true, 00:24:17.830 "data_offset": 2048, 00:24:17.830 "data_size": 63488 00:24:17.830 }, 00:24:17.830 { 00:24:17.830 "name": "BaseBdev4", 00:24:17.830 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:17.830 "is_configured": true, 00:24:17.830 "data_offset": 2048, 00:24:17.830 "data_size": 63488 00:24:17.830 } 00:24:17.830 ] 00:24:17.830 }' 00:24:17.830 05:04:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.830 05:04:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:17.830 05:04:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.830 05:04:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.830 05:04:41 -- bdev/bdev_raid.sh@662 
-- # sleep 1 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.768 05:04:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.027 05:04:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:19.027 "name": "raid_bdev1", 00:24:19.027 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:19.027 "strip_size_kb": 64, 00:24:19.027 "state": "online", 00:24:19.027 "raid_level": "raid5f", 00:24:19.027 "superblock": true, 00:24:19.027 "num_base_bdevs": 4, 00:24:19.027 "num_base_bdevs_discovered": 4, 00:24:19.027 "num_base_bdevs_operational": 4, 00:24:19.028 "process": { 00:24:19.028 "type": "rebuild", 00:24:19.028 "target": "spare", 00:24:19.028 "progress": { 00:24:19.028 "blocks": 170880, 00:24:19.028 "percent": 89 00:24:19.028 } 00:24:19.028 }, 00:24:19.028 "base_bdevs_list": [ 00:24:19.028 { 00:24:19.028 "name": "spare", 00:24:19.028 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:19.028 "is_configured": true, 00:24:19.028 "data_offset": 2048, 00:24:19.028 "data_size": 63488 00:24:19.028 }, 00:24:19.028 { 00:24:19.028 "name": "BaseBdev2", 00:24:19.028 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:19.028 "is_configured": true, 00:24:19.028 "data_offset": 2048, 00:24:19.028 "data_size": 63488 00:24:19.028 }, 00:24:19.028 { 00:24:19.028 "name": "BaseBdev3", 00:24:19.028 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:19.028 "is_configured": true, 00:24:19.028 "data_offset": 2048, 00:24:19.028 "data_size": 63488 00:24:19.028 }, 00:24:19.028 { 00:24:19.028 "name": "BaseBdev4", 00:24:19.028 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:19.028 "is_configured": true, 00:24:19.028 "data_offset": 2048, 00:24:19.028 "data_size": 63488 00:24:19.028 } 00:24:19.028 ] 00:24:19.028 }' 00:24:19.028 05:04:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:19.028 05:04:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.028 05:04:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:19.028 05:04:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.028 05:04:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.966 05:04:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.966 [2024-11-18 05:04:43.474423] bdev_raid.c:2568:raid_bdev_process_thread_run: 
*DEBUG*: process completed on raid_bdev1 00:24:19.966 [2024-11-18 05:04:43.474493] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:19.966 [2024-11-18 05:04:43.474671] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.225 "name": "raid_bdev1", 00:24:20.225 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:20.225 "strip_size_kb": 64, 00:24:20.225 "state": "online", 00:24:20.225 "raid_level": "raid5f", 00:24:20.225 "superblock": true, 00:24:20.225 "num_base_bdevs": 4, 00:24:20.225 "num_base_bdevs_discovered": 4, 00:24:20.225 "num_base_bdevs_operational": 4, 00:24:20.225 "base_bdevs_list": [ 00:24:20.225 { 00:24:20.225 "name": "spare", 00:24:20.225 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:20.225 "is_configured": true, 00:24:20.225 "data_offset": 2048, 00:24:20.225 "data_size": 63488 00:24:20.225 }, 00:24:20.225 { 00:24:20.225 "name": "BaseBdev2", 00:24:20.225 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:20.225 "is_configured": true, 00:24:20.225 "data_offset": 2048, 00:24:20.225 "data_size": 63488 00:24:20.225 }, 00:24:20.225 { 00:24:20.225 "name": "BaseBdev3", 00:24:20.225 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:20.225 "is_configured": true, 00:24:20.225 "data_offset": 2048, 00:24:20.225 "data_size": 63488 00:24:20.225 }, 00:24:20.225 { 00:24:20.225 "name": "BaseBdev4", 00:24:20.225 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:20.225 "is_configured": true, 00:24:20.225 "data_offset": 2048, 00:24:20.225 "data_size": 63488 00:24:20.225 } 00:24:20.225 ] 00:24:20.225 }' 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@660 -- # break 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.225 05:04:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.485 "name": "raid_bdev1", 00:24:20.485 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:20.485 "strip_size_kb": 64, 00:24:20.485 "state": "online", 00:24:20.485 "raid_level": "raid5f", 00:24:20.485 "superblock": true, 00:24:20.485 "num_base_bdevs": 4, 00:24:20.485 "num_base_bdevs_discovered": 4, 00:24:20.485 "num_base_bdevs_operational": 4, 00:24:20.485 "base_bdevs_list": [ 00:24:20.485 { 00:24:20.485 "name": "spare", 00:24:20.485 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:20.485 "is_configured": true, 00:24:20.485 "data_offset": 2048, 00:24:20.485 "data_size": 63488 00:24:20.485 }, 00:24:20.485 { 00:24:20.485 "name": "BaseBdev2", 00:24:20.485 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:20.485 "is_configured": true, 
00:24:20.485 "data_offset": 2048, 00:24:20.485 "data_size": 63488 00:24:20.485 }, 00:24:20.485 { 00:24:20.485 "name": "BaseBdev3", 00:24:20.485 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:20.485 "is_configured": true, 00:24:20.485 "data_offset": 2048, 00:24:20.485 "data_size": 63488 00:24:20.485 }, 00:24:20.485 { 00:24:20.485 "name": "BaseBdev4", 00:24:20.485 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:20.485 "is_configured": true, 00:24:20.485 "data_offset": 2048, 00:24:20.485 "data_size": 63488 00:24:20.485 } 00:24:20.485 ] 00:24:20.485 }' 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.485 05:04:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.745 05:04:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.745 "name": "raid_bdev1", 00:24:20.745 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:20.745 "strip_size_kb": 64, 00:24:20.745 "state": "online", 00:24:20.745 "raid_level": "raid5f", 00:24:20.745 "superblock": true, 00:24:20.745 "num_base_bdevs": 4, 00:24:20.745 "num_base_bdevs_discovered": 4, 00:24:20.745 "num_base_bdevs_operational": 4, 00:24:20.745 "base_bdevs_list": [ 00:24:20.745 { 00:24:20.745 "name": "spare", 00:24:20.745 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:20.745 "is_configured": true, 00:24:20.745 "data_offset": 2048, 00:24:20.745 "data_size": 63488 00:24:20.745 }, 00:24:20.745 { 00:24:20.745 "name": "BaseBdev2", 00:24:20.745 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:20.745 "is_configured": true, 00:24:20.745 "data_offset": 2048, 00:24:20.745 "data_size": 63488 00:24:20.745 }, 00:24:20.745 { 00:24:20.745 "name": "BaseBdev3", 00:24:20.745 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:20.745 "is_configured": true, 00:24:20.745 "data_offset": 2048, 00:24:20.745 "data_size": 63488 00:24:20.745 }, 00:24:20.745 { 00:24:20.745 "name": "BaseBdev4", 00:24:20.745 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:20.745 "is_configured": true, 00:24:20.745 "data_offset": 2048, 00:24:20.745 "data_size": 63488 00:24:20.745 } 00:24:20.745 ] 00:24:20.745 }' 00:24:20.745 05:04:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.745 05:04:44 -- common/autotest_common.sh@10 -- # set +x 00:24:21.004 05:04:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:21.264 [2024-11-18 05:04:44.668626] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:21.264 [2024-11-18 05:04:44.668663] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:21.264 [2024-11-18 05:04:44.668740] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:21.264 [2024-11-18 05:04:44.668838] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:21.264 [2024-11-18 05:04:44.668852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:24:21.264 05:04:44 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.264 05:04:44 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:21.523 05:04:44 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:21.523 05:04:44 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:21.523 05:04:44 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@12 -- # local i 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.523 05:04:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:21.782 /dev/nbd0 00:24:21.782 05:04:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:21.782 05:04:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:21.782 05:04:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:21.782 05:04:45 -- common/autotest_common.sh@867 -- # local i 00:24:21.782 05:04:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:21.782 05:04:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:21.782 05:04:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:21.782 05:04:45 -- common/autotest_common.sh@871 -- # break 00:24:21.782 05:04:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:21.782 05:04:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:21.782 05:04:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:21.782 1+0 records in 00:24:21.782 1+0 records out 00:24:21.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221671 s, 18.5 MB/s 00:24:21.782 05:04:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.782 05:04:45 -- common/autotest_common.sh@884 -- # size=4096 00:24:21.782 05:04:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.782 05:04:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:21.782 05:04:45 -- common/autotest_common.sh@887 -- # return 0 00:24:21.782 05:04:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:21.782 05:04:45 -- 
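
NOTE: waitfornbd (common/autotest_common.sh@866-887 in the trace above) treats an NBD device as usable only after its name shows up in /proc/partitions and a single 4 KiB direct-I/O read from it succeeds. A condensed sketch of that check; the retry pacing and the temp-file path are assumptions, the rest mirrors the trace:

    # condensed sketch of the waitfornbd pattern; error reporting trimmed
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # kernel registered it?
            sleep 0.1                                          # pacing assumed
        done
        # prove the device actually serves I/O: one 4 KiB block with O_DIRECT
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) != 0 ]]   # zero-byte copy means the read failed
    }
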
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.782 05:04:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:22.041 /dev/nbd1 00:24:22.041 05:04:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:22.041 05:04:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:22.041 05:04:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:22.041 05:04:45 -- common/autotest_common.sh@867 -- # local i 00:24:22.041 05:04:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:22.041 05:04:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:22.041 05:04:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:22.041 05:04:45 -- common/autotest_common.sh@871 -- # break 00:24:22.041 05:04:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:22.041 05:04:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:22.041 05:04:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:22.041 1+0 records in 00:24:22.041 1+0 records out 00:24:22.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333987 s, 12.3 MB/s 00:24:22.041 05:04:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:22.041 05:04:45 -- common/autotest_common.sh@884 -- # size=4096 00:24:22.041 05:04:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:22.041 05:04:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:22.041 05:04:45 -- common/autotest_common.sh@887 -- # return 0 00:24:22.041 05:04:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:22.041 05:04:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:22.041 05:04:45 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:22.300 05:04:45 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:22.300 05:04:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:22.300 05:04:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:22.300 05:04:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:22.300 05:04:45 -- bdev/nbd_common.sh@51 -- # local i 00:24:22.300 05:04:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.300 05:04:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@41 -- # break 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.559 05:04:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:22.559 05:04:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:22.818 05:04:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:22.818 05:04:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:22.818 
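
NOTE: the actual data check above is a single cmp between the two exported devices, and the 1048576-byte skip is not arbitrary: every base bdev in the JSON dumps reports data_offset 2048 blocks, and 2048 blocks x 512 bytes = 1 MiB, so cmp starts right past the raid superblock region and compares only the user data that the rebuild had to reconstruct:

    # as run above; 'cmp -i N' skips the first N bytes of BOTH inputs
    cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1   # 2048-block data_offset -> 1048576 bytes
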
05:04:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:22.818 05:04:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:22.818 05:04:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:22.818 05:04:46 -- bdev/nbd_common.sh@41 -- # break 00:24:22.818 05:04:46 -- bdev/nbd_common.sh@45 -- # return 0 00:24:22.818 05:04:46 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:22.818 05:04:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:22.818 05:04:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:22.818 05:04:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:22.818 05:04:46 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:23.078 [2024-11-18 05:04:46.450888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:23.078 [2024-11-18 05:04:46.450947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.078 [2024-11-18 05:04:46.450979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:24:23.078 [2024-11-18 05:04:46.450991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.078 [2024-11-18 05:04:46.453051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.078 [2024-11-18 05:04:46.453084] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:23.078 [2024-11-18 05:04:46.453176] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:23.078 [2024-11-18 05:04:46.453250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.078 BaseBdev1 00:24:23.078 05:04:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:23.078 05:04:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:23.078 05:04:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:23.356 05:04:46 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:23.631 [2024-11-18 05:04:46.943004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:23.631 [2024-11-18 05:04:46.943075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.631 [2024-11-18 05:04:46.943114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:24:23.631 [2024-11-18 05:04:46.943130] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.631 [2024-11-18 05:04:46.943609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.631 [2024-11-18 05:04:46.943631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:23.631 [2024-11-18 05:04:46.943719] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:23.631 [2024-11-18 05:04:46.943734] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:23.632 [2024-11-18 05:04:46.943745] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:24:23.632 [2024-11-18 05:04:46.943766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:24:23.632 [2024-11-18 05:04:46.943835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:23.632 BaseBdev2 00:24:23.632 05:04:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:23.632 05:04:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:23.632 05:04:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:23.632 05:04:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:23.898 [2024-11-18 05:04:47.327113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:23.898 [2024-11-18 05:04:47.327173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.898 [2024-11-18 05:04:47.327214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:24:23.898 [2024-11-18 05:04:47.327232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.898 [2024-11-18 05:04:47.327633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.898 [2024-11-18 05:04:47.327658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:23.898 [2024-11-18 05:04:47.327742] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:23.898 [2024-11-18 05:04:47.327782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:23.898 BaseBdev3 00:24:23.898 05:04:47 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:23.898 05:04:47 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:23.898 05:04:47 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:24.157 05:04:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:24.416 [2024-11-18 05:04:47.695207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:24.416 [2024-11-18 05:04:47.695280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.416 [2024-11-18 05:04:47.695308] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:24:24.416 [2024-11-18 05:04:47.695322] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.416 [2024-11-18 05:04:47.695739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.416 [2024-11-18 05:04:47.695763] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:24.416 [2024-11-18 05:04:47.695844] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:24.416 [2024-11-18 05:04:47.695875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:24.416 BaseBdev4 00:24:24.416 05:04:47 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:24.675 05:04:47 -- bdev/bdev_raid.sh@702 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:24.675 [2024-11-18 05:04:48.107302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:24.675 [2024-11-18 05:04:48.107360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.675 [2024-11-18 05:04:48.107390] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:24:24.675 [2024-11-18 05:04:48.107405] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.675 [2024-11-18 05:04:48.107848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.675 [2024-11-18 05:04:48.107873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:24.675 [2024-11-18 05:04:48.107964] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:24.675 [2024-11-18 05:04:48.108001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.675 spare 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.675 05:04:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.934 [2024-11-18 05:04:48.208119] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:24:24.934 [2024-11-18 05:04:48.208154] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:24.934 [2024-11-18 05:04:48.208509] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048a80 00:24:24.934 [2024-11-18 05:04:48.215119] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:24:24.934 [2024-11-18 05:04:48.215142] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:24:24.934 [2024-11-18 05:04:48.215596] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.934 05:04:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.934 "name": "raid_bdev1", 00:24:24.934 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:24.934 "strip_size_kb": 64, 00:24:24.934 "state": "online", 00:24:24.934 "raid_level": "raid5f", 00:24:24.934 "superblock": true, 00:24:24.934 "num_base_bdevs": 4, 00:24:24.934 "num_base_bdevs_discovered": 4, 00:24:24.934 "num_base_bdevs_operational": 4, 00:24:24.934 "base_bdevs_list": [ 00:24:24.934 { 00:24:24.934 "name": "spare", 00:24:24.934 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:24.934 "is_configured": true, 00:24:24.935 
"data_offset": 2048, 00:24:24.935 "data_size": 63488 00:24:24.935 }, 00:24:24.935 { 00:24:24.935 "name": "BaseBdev2", 00:24:24.935 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:24.935 "is_configured": true, 00:24:24.935 "data_offset": 2048, 00:24:24.935 "data_size": 63488 00:24:24.935 }, 00:24:24.935 { 00:24:24.935 "name": "BaseBdev3", 00:24:24.935 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:24.935 "is_configured": true, 00:24:24.935 "data_offset": 2048, 00:24:24.935 "data_size": 63488 00:24:24.935 }, 00:24:24.935 { 00:24:24.935 "name": "BaseBdev4", 00:24:24.935 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:24.935 "is_configured": true, 00:24:24.935 "data_offset": 2048, 00:24:24.935 "data_size": 63488 00:24:24.935 } 00:24:24.935 ] 00:24:24.935 }' 00:24:24.935 05:04:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.935 05:04:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.194 05:04:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:25.453 "name": "raid_bdev1", 00:24:25.453 "uuid": "1a01c979-d615-4927-90f4-cfb9a303925f", 00:24:25.453 "strip_size_kb": 64, 00:24:25.453 "state": "online", 00:24:25.453 "raid_level": "raid5f", 00:24:25.453 "superblock": true, 00:24:25.453 "num_base_bdevs": 4, 00:24:25.453 "num_base_bdevs_discovered": 4, 00:24:25.453 "num_base_bdevs_operational": 4, 00:24:25.453 "base_bdevs_list": [ 00:24:25.453 { 00:24:25.453 "name": "spare", 00:24:25.453 "uuid": "dbf718d0-1936-590b-9c0d-ed3741ab4d80", 00:24:25.453 "is_configured": true, 00:24:25.453 "data_offset": 2048, 00:24:25.453 "data_size": 63488 00:24:25.453 }, 00:24:25.453 { 00:24:25.453 "name": "BaseBdev2", 00:24:25.453 "uuid": "1de8f964-3749-5019-bbe7-ee1a4f0317d9", 00:24:25.453 "is_configured": true, 00:24:25.453 "data_offset": 2048, 00:24:25.453 "data_size": 63488 00:24:25.453 }, 00:24:25.453 { 00:24:25.453 "name": "BaseBdev3", 00:24:25.453 "uuid": "5d582e3f-5512-5cdb-85f6-d390a5c10b9f", 00:24:25.453 "is_configured": true, 00:24:25.453 "data_offset": 2048, 00:24:25.453 "data_size": 63488 00:24:25.453 }, 00:24:25.453 { 00:24:25.453 "name": "BaseBdev4", 00:24:25.453 "uuid": "cca4b158-b6e6-5e3e-a144-f5bc7b4ca21f", 00:24:25.453 "is_configured": true, 00:24:25.453 "data_offset": 2048, 00:24:25.453 "data_size": 63488 00:24:25.453 } 00:24:25.453 ] 00:24:25.453 }' 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:25.453 05:04:48 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.712 05:04:49 -- 
bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:24:25.712 05:04:49 -- bdev/bdev_raid.sh@709 -- # killprocess 86535
00:24:25.712 05:04:49 -- common/autotest_common.sh@936 -- # '[' -z 86535 ']'
00:24:25.712 05:04:49 -- common/autotest_common.sh@940 -- # kill -0 86535
00:24:25.712 05:04:49 -- common/autotest_common.sh@941 -- # uname
00:24:25.712 05:04:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:25.712 05:04:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86535
00:24:25.712 05:04:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:25.712 killing process with pid 86535
Received shutdown signal, test time was about 60.000000 seconds
00
00
Latency(us)
[2024-11-18T05:04:49.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.712 [2024-11-18T05:04:49.236Z] ===================================================================================================================
00:24:25.712 [2024-11-18T05:04:49.236Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:24:25.712 05:04:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:25.712 05:04:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86535'
00:24:25.712 05:04:49 -- common/autotest_common.sh@955 -- # kill 86535
00:24:25.712 [2024-11-18 05:04:49.207973] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:25.712 05:04:49 -- common/autotest_common.sh@960 -- # wait 86535
00:24:25.712 [2024-11-18 05:04:49.208057] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:25.712 [2024-11-18 05:04:49.208146] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:25.712 [2024-11-18 05:04:49.208163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline
00:24:26.279 [2024-11-18 05:04:49.532498] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:27.215 05:04:50 -- bdev/bdev_raid.sh@711 -- # return 0
00:24:27.215
00:24:27.215 real 0m25.818s
00:24:27.215 user 0m37.003s
00:24:27.215 sys 0m3.157s
00:24:27.215 05:04:50 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:27.215 ************************************
00:24:27.215 END TEST raid5f_rebuild_test_sb
00:24:27.215 ************************************
00:24:27.215 05:04:50 -- common/autotest_common.sh@10 -- # set +x
00:24:27.215 05:04:50 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest
************************************
00:24:27.215 END TEST bdev_raid
************************************
00:24:27.215
00:24:27.215 real 10m33.793s
00:24:27.215 user 16m21.366s
00:24:27.215 sys 1m34.469s
00:24:27.215 05:04:50 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:24:27.215 05:04:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:24:27.215 05:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:27.215 05:04:50 -- common/autotest_common.sh@10 -- # set +x
00:24:27.215 ************************************
00:24:27.215 START TEST bdevperf_config
00:24:27.215 ************************************
00:24:27.215 05:04:50 -- common/autotest_common.sh@1114 -- #
/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:27.215 * Looking for test storage... 00:24:27.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:24:27.215 05:04:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:27.215 05:04:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:27.215 05:04:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:27.215 05:04:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:27.215 05:04:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:27.215 05:04:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:27.215 05:04:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:27.215 05:04:50 -- scripts/common.sh@335 -- # IFS=.-: 00:24:27.215 05:04:50 -- scripts/common.sh@335 -- # read -ra ver1 00:24:27.215 05:04:50 -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.215 05:04:50 -- scripts/common.sh@336 -- # read -ra ver2 00:24:27.215 05:04:50 -- scripts/common.sh@337 -- # local 'op=<' 00:24:27.215 05:04:50 -- scripts/common.sh@339 -- # ver1_l=2 00:24:27.215 05:04:50 -- scripts/common.sh@340 -- # ver2_l=1 00:24:27.215 05:04:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:27.215 05:04:50 -- scripts/common.sh@343 -- # case "$op" in 00:24:27.215 05:04:50 -- scripts/common.sh@344 -- # : 1 00:24:27.215 05:04:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:27.215 05:04:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.215 05:04:50 -- scripts/common.sh@364 -- # decimal 1 00:24:27.215 05:04:50 -- scripts/common.sh@352 -- # local d=1 00:24:27.215 05:04:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.215 05:04:50 -- scripts/common.sh@354 -- # echo 1 00:24:27.215 05:04:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:27.215 05:04:50 -- scripts/common.sh@365 -- # decimal 2 00:24:27.215 05:04:50 -- scripts/common.sh@352 -- # local d=2 00:24:27.215 05:04:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.215 05:04:50 -- scripts/common.sh@354 -- # echo 2 00:24:27.215 05:04:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:27.215 05:04:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:27.215 05:04:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:27.215 05:04:50 -- scripts/common.sh@367 -- # return 0 00:24:27.215 05:04:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.215 05:04:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:27.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.215 --rc genhtml_branch_coverage=1 00:24:27.215 --rc genhtml_function_coverage=1 00:24:27.215 --rc genhtml_legend=1 00:24:27.215 --rc geninfo_all_blocks=1 00:24:27.215 --rc geninfo_unexecuted_blocks=1 00:24:27.216 00:24:27.216 ' 00:24:27.216 05:04:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:27.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.216 --rc genhtml_branch_coverage=1 00:24:27.216 --rc genhtml_function_coverage=1 00:24:27.216 --rc genhtml_legend=1 00:24:27.216 --rc geninfo_all_blocks=1 00:24:27.216 --rc geninfo_unexecuted_blocks=1 00:24:27.216 00:24:27.216 ' 00:24:27.216 05:04:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:27.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.216 --rc genhtml_branch_coverage=1 00:24:27.216 --rc genhtml_function_coverage=1 00:24:27.216 --rc genhtml_legend=1 00:24:27.216 --rc 
geninfo_all_blocks=1 00:24:27.216 --rc geninfo_unexecuted_blocks=1 00:24:27.216 00:24:27.216 ' 00:24:27.216 05:04:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:27.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.216 --rc genhtml_branch_coverage=1 00:24:27.216 --rc genhtml_function_coverage=1 00:24:27.216 --rc genhtml_legend=1 00:24:27.216 --rc geninfo_all_blocks=1 00:24:27.216 --rc geninfo_unexecuted_blocks=1 00:24:27.216 00:24:27.216 ' 00:24:27.216 05:04:50 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:24:27.216 05:04:50 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:24:27.216 05:04:50 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:24:27.216 05:04:50 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:27.216 05:04:50 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:27.216 05:04:50 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:24:27.216 05:04:50 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:27.216 05:04:50 -- bdevperf/common.sh@9 -- # local rw=read 00:24:27.216 05:04:50 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:27.216 05:04:50 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:27.216 05:04:50 -- bdevperf/common.sh@13 -- # cat 00:24:27.216 05:04:50 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:27.216 00:24:27.216 05:04:50 -- bdevperf/common.sh@19 -- # echo 00:24:27.216 05:04:50 -- bdevperf/common.sh@20 -- # cat 00:24:27.216 05:04:50 -- bdevperf/test_config.sh@18 -- # create_job job0 00:24:27.216 05:04:50 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:27.216 05:04:50 -- bdevperf/common.sh@9 -- # local rw= 00:24:27.216 05:04:50 -- bdevperf/common.sh@10 -- # local filename= 00:24:27.216 05:04:50 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:27.216 05:04:50 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:27.216 00:24:27.216 05:04:50 -- bdevperf/common.sh@19 -- # echo 00:24:27.216 05:04:50 -- bdevperf/common.sh@20 -- # cat 00:24:27.475 05:04:50 -- bdevperf/test_config.sh@19 -- # create_job job1 00:24:27.475 05:04:50 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:27.475 05:04:50 -- bdevperf/common.sh@9 -- # local rw= 00:24:27.475 05:04:50 -- bdevperf/common.sh@10 -- # local filename= 00:24:27.475 05:04:50 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:27.475 00:24:27.475 05:04:50 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:27.475 05:04:50 -- bdevperf/common.sh@19 -- # echo 00:24:27.475 05:04:50 -- bdevperf/common.sh@20 -- # cat 00:24:27.475 05:04:50 -- bdevperf/test_config.sh@20 -- # create_job job2 00:24:27.475 05:04:50 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:27.475 05:04:50 -- bdevperf/common.sh@9 -- # local rw= 00:24:27.475 05:04:50 -- bdevperf/common.sh@10 -- # local filename= 00:24:27.475 05:04:50 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:27.475 05:04:50 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:27.475 00:24:27.475 05:04:50 -- bdevperf/common.sh@19 -- # echo 00:24:27.475 05:04:50 -- bdevperf/common.sh@20 -- # cat 00:24:27.475 05:04:50 -- bdevperf/test_config.sh@21 -- # create_job job3 00:24:27.475 05:04:50 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:27.475 05:04:50 -- bdevperf/common.sh@9 -- # local rw= 00:24:27.475 
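
NOTE: create_job (bdevperf/common.sh@8-20 above) appends one INI section per call to test.conf: the section header comes from the job='[...]' assignment, and only the global section takes the extra cat at common.sh@13 for shared options. The trace never prints the finished file, so the following reconstruction is an assumption -- it presumes the rw=read / filename=Malloc0 locals are written out as key=value lines -- but it shows the shape that makes bdevperf later report 'Using job config with 4 jobs':

    [global]
    rw=read
    filename=Malloc0

    [job0]

    [job1]

    [job2]

    [job3]
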
05:04:50 -- bdevperf/common.sh@10 -- # local filename= 00:24:27.475 00:24:27.475 05:04:50 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:27.475 05:04:50 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:27.475 05:04:50 -- bdevperf/common.sh@19 -- # echo 00:24:27.475 05:04:50 -- bdevperf/common.sh@20 -- # cat 00:24:27.475 05:04:50 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:31.667 05:04:54 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-18 05:04:50.814327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:31.667 [2024-11-18 05:04:50.814498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87236 ] 00:24:31.667 Using job config with 4 jobs 00:24:31.667 [2024-11-18 05:04:50.984080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.667 [2024-11-18 05:04:51.138812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.667 cpumask for '\''job0'\'' is too big 00:24:31.667 cpumask for '\''job1'\'' is too big 00:24:31.667 cpumask for '\''job2'\'' is too big 00:24:31.667 cpumask for '\''job3'\'' is too big 00:24:31.667 Running I/O for 2 seconds... 00:24:31.667 00:24:31.667 Latency(us) 00:24:31.667 [2024-11-18T05:04:55.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.667 [2024-11-18T05:04:55.191Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.667 Malloc0 : 2.01 31083.75 30.36 0.00 0.00 8228.80 1452.22 12690.15 00:24:31.667 [2024-11-18T05:04:55.191Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.667 Malloc0 : 2.02 31097.00 30.37 0.00 0.00 8209.84 1422.43 11260.28 00:24:31.667 [2024-11-18T05:04:55.191Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.667 Malloc0 : 2.02 31076.97 30.35 0.00 0.00 8201.25 1474.56 10783.65 00:24:31.667 [2024-11-18T05:04:55.191Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.667 Malloc0 : 2.02 31056.90 30.33 0.00 0.00 8192.84 1429.88 10604.92 00:24:31.667 [2024-11-18T05:04:55.192Z] =================================================================================================================== 00:24:31.668 [2024-11-18T05:04:55.192Z] Total : 124314.61 121.40 0.00 0.00 8208.16 1422.43 12690.15' 00:24:31.668 05:04:54 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-18 05:04:50.814327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:31.668 [2024-11-18 05:04:50.814498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87236 ] 00:24:31.668 Using job config with 4 jobs 00:24:31.668 [2024-11-18 05:04:50.984080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.668 [2024-11-18 05:04:51.138812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.668 cpumask for '\''job0'\'' is too big 00:24:31.668 cpumask for '\''job1'\'' is too big 00:24:31.668 cpumask for '\''job2'\'' is too big 00:24:31.668 cpumask for '\''job3'\'' is too big 00:24:31.668 Running I/O for 2 seconds... 00:24:31.668 00:24:31.668 Latency(us) 00:24:31.668 [2024-11-18T05:04:55.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.01 31083.75 30.36 0.00 0.00 8228.80 1452.22 12690.15 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.02 31097.00 30.37 0.00 0.00 8209.84 1422.43 11260.28 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.02 31076.97 30.35 0.00 0.00 8201.25 1474.56 10783.65 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.02 31056.90 30.33 0.00 0.00 8192.84 1429.88 10604.92 00:24:31.668 [2024-11-18T05:04:55.192Z] =================================================================================================================== 00:24:31.668 [2024-11-18T05:04:55.192Z] Total : 124314.61 121.40 0.00 0.00 8208.16 1422.43 12690.15' 00:24:31.668 05:04:54 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 05:04:50.814327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:31.668 [2024-11-18 05:04:50.814498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87236 ] 00:24:31.668 Using job config with 4 jobs 00:24:31.668 [2024-11-18 05:04:50.984080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.668 [2024-11-18 05:04:51.138812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.668 cpumask for '\''job0'\'' is too big 00:24:31.668 cpumask for '\''job1'\'' is too big 00:24:31.668 cpumask for '\''job2'\'' is too big 00:24:31.668 cpumask for '\''job3'\'' is too big 00:24:31.668 Running I/O for 2 seconds... 
00:24:31.668 00:24:31.668 Latency(us) 00:24:31.668 [2024-11-18T05:04:55.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.01 31083.75 30.36 0.00 0.00 8228.80 1452.22 12690.15 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.02 31097.00 30.37 0.00 0.00 8209.84 1422.43 11260.28 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.02 31076.97 30.35 0.00 0.00 8201.25 1474.56 10783.65 00:24:31.668 [2024-11-18T05:04:55.192Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:31.668 Malloc0 : 2.02 31056.90 30.33 0.00 0.00 8192.84 1429.88 10604.92 00:24:31.668 [2024-11-18T05:04:55.192Z] =================================================================================================================== 00:24:31.668 [2024-11-18T05:04:55.192Z] Total : 124314.61 121.40 0.00 0.00 8208.16 1422.43 12690.15' 00:24:31.668 05:04:54 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:31.668 05:04:54 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:31.668 05:04:54 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:24:31.668 05:04:54 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:31.668 [2024-11-18 05:04:54.657364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:31.668 [2024-11-18 05:04:54.657541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87284 ] 00:24:31.668 [2024-11-18 05:04:54.826590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.668 [2024-11-18 05:04:54.989180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.927 cpumask for 'job0' is too big 00:24:31.927 cpumask for 'job1' is too big 00:24:31.927 cpumask for 'job2' is too big 00:24:31.927 cpumask for 'job3' is too big 00:24:35.213 05:04:58 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:24:35.213 Running I/O for 2 seconds... 
00:24:35.213 00:24:35.213 Latency(us) 00:24:35.213 [2024-11-18T05:04:58.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.213 [2024-11-18T05:04:58.737Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:35.213 Malloc0 : 2.01 31390.55 30.65 0.00 0.00 8149.02 1534.14 12809.31 00:24:35.213 [2024-11-18T05:04:58.737Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:35.213 Malloc0 : 2.02 31369.83 30.63 0.00 0.00 8139.98 1392.64 11379.43 00:24:35.213 [2024-11-18T05:04:58.737Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:35.213 Malloc0 : 2.02 31349.62 30.61 0.00 0.00 8130.16 1429.88 11379.43 00:24:35.213 [2024-11-18T05:04:58.737Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:35.213 Malloc0 : 2.02 31329.29 30.60 0.00 0.00 8121.24 1407.53 10843.23 00:24:35.213 [2024-11-18T05:04:58.737Z] =================================================================================================================== 00:24:35.213 [2024-11-18T05:04:58.737Z] Total : 125439.29 122.50 0.00 0.00 8135.10 1392.64 12809.31' 00:24:35.213 05:04:58 -- bdevperf/test_config.sh@27 -- # cleanup 00:24:35.213 05:04:58 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:35.213 05:04:58 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:24:35.213 05:04:58 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:35.213 05:04:58 -- bdevperf/common.sh@9 -- # local rw=write 00:24:35.213 05:04:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:35.213 05:04:58 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:35.213 05:04:58 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:35.213 00:24:35.213 05:04:58 -- bdevperf/common.sh@19 -- # echo 00:24:35.213 05:04:58 -- bdevperf/common.sh@20 -- # cat 00:24:35.213 05:04:58 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:24:35.213 05:04:58 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:35.213 05:04:58 -- bdevperf/common.sh@9 -- # local rw=write 00:24:35.213 05:04:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:35.213 05:04:58 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:35.213 05:04:58 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:35.213 00:24:35.213 05:04:58 -- bdevperf/common.sh@19 -- # echo 00:24:35.213 05:04:58 -- bdevperf/common.sh@20 -- # cat 00:24:35.213 05:04:58 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:24:35.213 05:04:58 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:35.213 05:04:58 -- bdevperf/common.sh@9 -- # local rw=write 00:24:35.213 05:04:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:35.213 05:04:58 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:35.213 05:04:58 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:35.213 00:24:35.213 05:04:58 -- bdevperf/common.sh@19 -- # echo 00:24:35.213 05:04:58 -- bdevperf/common.sh@20 -- # cat 00:24:35.213 05:04:58 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:39.404 05:05:02 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-18 05:04:58.521734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:39.404 [2024-11-18 05:04:58.521909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87331 ] 00:24:39.404 Using job config with 3 jobs 00:24:39.404 [2024-11-18 05:04:58.691281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.404 [2024-11-18 05:04:58.851773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.404 cpumask for '\''job0'\'' is too big 00:24:39.404 cpumask for '\''job1'\'' is too big 00:24:39.404 cpumask for '\''job2'\'' is too big 00:24:39.404 Running I/O for 2 seconds... 00:24:39.404 00:24:39.404 Latency(us) 00:24:39.404 [2024-11-18T05:05:02.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41648.55 40.67 0.00 0.00 6140.89 1429.88 9830.40 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41620.80 40.65 0.00 0.00 6134.43 1429.88 8817.57 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41593.52 40.62 0.00 0.00 6127.05 1571.37 8996.31 00:24:39.404 [2024-11-18T05:05:02.928Z] =================================================================================================================== 00:24:39.404 [2024-11-18T05:05:02.928Z] Total : 124862.87 121.94 0.00 0.00 6134.13 1429.88 9830.40' 00:24:39.404 05:05:02 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-18 05:04:58.521734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:39.404 [2024-11-18 05:04:58.521909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87331 ] 00:24:39.404 Using job config with 3 jobs 00:24:39.404 [2024-11-18 05:04:58.691281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.404 [2024-11-18 05:04:58.851773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.404 cpumask for '\''job0'\'' is too big 00:24:39.404 cpumask for '\''job1'\'' is too big 00:24:39.404 cpumask for '\''job2'\'' is too big 00:24:39.404 Running I/O for 2 seconds... 
00:24:39.404 00:24:39.404 Latency(us) 00:24:39.404 [2024-11-18T05:05:02.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41648.55 40.67 0.00 0.00 6140.89 1429.88 9830.40 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41620.80 40.65 0.00 0.00 6134.43 1429.88 8817.57 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41593.52 40.62 0.00 0.00 6127.05 1571.37 8996.31 00:24:39.404 [2024-11-18T05:05:02.928Z] =================================================================================================================== 00:24:39.404 [2024-11-18T05:05:02.928Z] Total : 124862.87 121.94 0.00 0.00 6134.13 1429.88 9830.40' 00:24:39.404 05:05:02 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 05:04:58.521734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:39.404 [2024-11-18 05:04:58.521909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87331 ] 00:24:39.404 Using job config with 3 jobs 00:24:39.404 [2024-11-18 05:04:58.691281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.404 [2024-11-18 05:04:58.851773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.404 cpumask for '\''job0'\'' is too big 00:24:39.404 cpumask for '\''job1'\'' is too big 00:24:39.404 cpumask for '\''job2'\'' is too big 00:24:39.404 Running I/O for 2 seconds... 
00:24:39.404 00:24:39.404 Latency(us) 00:24:39.404 [2024-11-18T05:05:02.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41648.55 40.67 0.00 0.00 6140.89 1429.88 9830.40 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41620.80 40.65 0.00 0.00 6134.43 1429.88 8817.57 00:24:39.404 [2024-11-18T05:05:02.928Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:39.404 Malloc0 : 2.01 41593.52 40.62 0.00 0.00 6127.05 1571.37 8996.31 00:24:39.404 [2024-11-18T05:05:02.928Z] =================================================================================================================== 00:24:39.404 [2024-11-18T05:05:02.928Z] Total : 124862.87 121.94 0.00 0.00 6134.13 1429.88 9830.40' 00:24:39.404 05:05:02 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:39.404 05:05:02 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:39.404 05:05:02 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:24:39.404 05:05:02 -- bdevperf/test_config.sh@35 -- # cleanup 00:24:39.404 05:05:02 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:39.404 05:05:02 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:24:39.404 05:05:02 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:39.404 05:05:02 -- bdevperf/common.sh@9 -- # local rw=rw 00:24:39.404 05:05:02 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:24:39.404 05:05:02 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:39.404 05:05:02 -- bdevperf/common.sh@13 -- # cat 00:24:39.404 05:05:02 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:39.404 00:24:39.404 05:05:02 -- bdevperf/common.sh@19 -- # echo 00:24:39.404 05:05:02 -- bdevperf/common.sh@20 -- # cat 00:24:39.404 05:05:02 -- bdevperf/test_config.sh@38 -- # create_job job0 00:24:39.404 05:05:02 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:39.404 05:05:02 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.404 05:05:02 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.404 05:05:02 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:39.404 05:05:02 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:39.404 00:24:39.404 05:05:02 -- bdevperf/common.sh@19 -- # echo 00:24:39.404 05:05:02 -- bdevperf/common.sh@20 -- # cat 00:24:39.405 05:05:02 -- bdevperf/test_config.sh@39 -- # create_job job1 00:24:39.405 05:05:02 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:39.405 05:05:02 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.405 05:05:02 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.405 05:05:02 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:39.405 05:05:02 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:39.405 00:24:39.405 05:05:02 -- bdevperf/common.sh@19 -- # echo 00:24:39.405 05:05:02 -- bdevperf/common.sh@20 -- # cat 00:24:39.405 05:05:02 -- bdevperf/test_config.sh@40 -- # create_job job2 00:24:39.405 05:05:02 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:39.405 05:05:02 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.405 05:05:02 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.405 05:05:02 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:39.405 00:24:39.405 05:05:02 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:24:39.405 05:05:02 -- bdevperf/common.sh@19 -- # echo 00:24:39.405 05:05:02 -- bdevperf/common.sh@20 -- # cat 00:24:39.405 05:05:02 -- bdevperf/test_config.sh@41 -- # create_job job3 00:24:39.405 05:05:02 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:39.405 05:05:02 -- bdevperf/common.sh@9 -- # local rw= 00:24:39.405 05:05:02 -- bdevperf/common.sh@10 -- # local filename= 00:24:39.405 05:05:02 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:39.405 00:24:39.405 05:05:02 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:39.405 05:05:02 -- bdevperf/common.sh@19 -- # echo 00:24:39.405 05:05:02 -- bdevperf/common.sh@20 -- # cat 00:24:39.405 05:05:02 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:42.693 05:05:06 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-18 05:05:02.368797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:42.693 [2024-11-18 05:05:02.368910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87381 ] 00:24:42.693 Using job config with 4 jobs 00:24:42.693 [2024-11-18 05:05:02.519102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.693 [2024-11-18 05:05:02.680894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.693 cpumask for '\''job0'\'' is too big 00:24:42.693 cpumask for '\''job1'\'' is too big 00:24:42.693 cpumask for '\''job2'\'' is too big 00:24:42.693 cpumask for '\''job3'\'' is too big 00:24:42.693 Running I/O for 2 seconds... 
00:24:42.694 00:24:42.694 Latency(us) 00:24:42.694 [2024-11-18T05:05:06.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc0 : 2.03 15524.81 15.16 0.00 0.00 16468.29 3232.12 26571.87 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc1 : 2.03 15513.97 15.15 0.00 0.00 16469.49 4140.68 26214.40 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc0 : 2.03 15503.46 15.14 0.00 0.00 16432.45 3083.17 22758.87 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc1 : 2.03 15492.62 15.13 0.00 0.00 16428.20 3783.21 22758.87 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc0 : 2.03 15482.65 15.12 0.00 0.00 16392.03 3038.49 20852.36 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc1 : 2.04 15471.99 15.11 0.00 0.00 16391.95 3619.37 20733.21 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc0 : 2.04 15462.09 15.10 0.00 0.00 16356.02 2978.91 21328.99 00:24:42.694 [2024-11-18T05:05:06.218Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:42.694 Malloc1 : 2.04 15560.46 15.20 0.00 0.00 16240.17 860.16 21567.30 00:24:42.694 [2024-11-18T05:05:06.218Z] =================================================================================================================== 00:24:42.694 [2024-11-18T05:05:06.218Z] Total : 124012.06 121.11 0.00 0.00 16397.16 860.16 26571.87' 00:24:42.694 05:05:06 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:42.694 05:05:06 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:42.694 05:05:06 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:42.694 05:05:06 -- bdevperf/test_config.sh@44 -- # cleanup 00:24:42.694 05:05:06 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:42.694 05:05:06 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:42.694 00:24:42.694 real 0m15.645s 00:24:42.694 user 0m14.165s 00:24:42.694 sys 0m0.998s 00:24:42.694 05:05:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:42.694 ************************************ 00:24:42.694 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.694 END TEST bdevperf_config 00:24:42.694 ************************************ 00:24:42.956 05:05:06 -- spdk/autotest.sh@185 -- # uname -s 00:24:42.956 05:05:06 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]] 00:24:42.956 05:05:06 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:42.956 05:05:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:42.956 05:05:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:42.956 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.956 ************************************ 00:24:42.956 START TEST reactor_set_interrupt 00:24:42.956 
************************************ 00:24:42.956 05:05:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:42.956 * Looking for test storage... 00:24:42.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:42.956 05:05:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:42.956 05:05:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:42.956 05:05:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:42.956 05:05:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:42.956 05:05:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:42.956 05:05:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:42.956 05:05:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:42.956 05:05:06 -- scripts/common.sh@335 -- # IFS=.-: 00:24:42.956 05:05:06 -- scripts/common.sh@335 -- # read -ra ver1 00:24:42.956 05:05:06 -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.956 05:05:06 -- scripts/common.sh@336 -- # read -ra ver2 00:24:42.956 05:05:06 -- scripts/common.sh@337 -- # local 'op=<' 00:24:42.956 05:05:06 -- scripts/common.sh@339 -- # ver1_l=2 00:24:42.956 05:05:06 -- scripts/common.sh@340 -- # ver2_l=1 00:24:42.956 05:05:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:42.956 05:05:06 -- scripts/common.sh@343 -- # case "$op" in 00:24:42.956 05:05:06 -- scripts/common.sh@344 -- # : 1 00:24:42.956 05:05:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:42.956 05:05:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.956 05:05:06 -- scripts/common.sh@364 -- # decimal 1 00:24:42.956 05:05:06 -- scripts/common.sh@352 -- # local d=1 00:24:42.956 05:05:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.956 05:05:06 -- scripts/common.sh@354 -- # echo 1 00:24:42.956 05:05:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:42.956 05:05:06 -- scripts/common.sh@365 -- # decimal 2 00:24:42.956 05:05:06 -- scripts/common.sh@352 -- # local d=2 00:24:42.956 05:05:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.956 05:05:06 -- scripts/common.sh@354 -- # echo 2 00:24:42.956 05:05:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:42.956 05:05:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:42.956 05:05:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:42.956 05:05:06 -- scripts/common.sh@367 -- # return 0 00:24:42.956 05:05:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.956 05:05:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.956 --rc genhtml_branch_coverage=1 00:24:42.956 --rc genhtml_function_coverage=1 00:24:42.956 --rc genhtml_legend=1 00:24:42.956 --rc geninfo_all_blocks=1 00:24:42.956 --rc geninfo_unexecuted_blocks=1 00:24:42.956 00:24:42.956 ' 00:24:42.956 05:05:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.956 --rc genhtml_branch_coverage=1 00:24:42.956 --rc genhtml_function_coverage=1 00:24:42.956 --rc genhtml_legend=1 00:24:42.956 --rc geninfo_all_blocks=1 00:24:42.956 --rc geninfo_unexecuted_blocks=1 00:24:42.956 00:24:42.956 ' 00:24:42.956 05:05:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.956 --rc genhtml_branch_coverage=1 
00:24:42.956 --rc genhtml_function_coverage=1 00:24:42.956 --rc genhtml_legend=1 00:24:42.956 --rc geninfo_all_blocks=1 00:24:42.956 --rc geninfo_unexecuted_blocks=1 00:24:42.956 00:24:42.956 ' 00:24:42.956 05:05:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:42.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.956 --rc genhtml_branch_coverage=1 00:24:42.956 --rc genhtml_function_coverage=1 00:24:42.956 --rc genhtml_legend=1 00:24:42.956 --rc geninfo_all_blocks=1 00:24:42.956 --rc geninfo_unexecuted_blocks=1 00:24:42.956 00:24:42.956 ' 00:24:42.956 05:05:06 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:42.956 05:05:06 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:42.956 05:05:06 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:42.956 05:05:06 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:42.956 05:05:06 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:24:42.956 05:05:06 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:42.956 05:05:06 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:42.956 05:05:06 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:42.956 05:05:06 -- common/autotest_common.sh@34 -- # set -e 00:24:42.956 05:05:06 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:42.956 05:05:06 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:42.956 05:05:06 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:42.956 05:05:06 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:42.957 05:05:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:42.957 05:05:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:42.957 05:05:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:42.957 05:05:06 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:42.957 05:05:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:42.957 05:05:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:42.957 05:05:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:42.957 05:05:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:42.957 05:05:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:42.957 05:05:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:42.957 05:05:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:42.957 05:05:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:42.957 05:05:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:42.957 05:05:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:42.957 05:05:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:42.957 05:05:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:42.957 05:05:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:42.957 05:05:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:42.957 05:05:06 -- common/build_config.sh@21 -- 
# CONFIG_ISCSI_INITIATOR=y 00:24:42.957 05:05:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:42.957 05:05:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:42.957 05:05:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:42.957 05:05:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:42.957 05:05:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:42.957 05:05:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:42.957 05:05:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:42.957 05:05:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:42.957 05:05:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:42.957 05:05:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:42.957 05:05:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:42.957 05:05:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:42.957 05:05:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:42.957 05:05:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:42.957 05:05:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:42.957 05:05:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:42.957 05:05:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:42.957 05:05:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:42.957 05:05:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:42.957 05:05:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:42.957 05:05:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:42.957 05:05:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:42.957 05:05:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:42.957 05:05:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:42.957 05:05:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:42.957 05:05:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:42.957 05:05:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:42.957 05:05:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:42.957 05:05:06 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:42.957 05:05:06 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:42.957 05:05:06 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:42.957 05:05:06 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:42.957 05:05:06 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:42.957 05:05:06 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:42.957 05:05:06 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:24:42.957 05:05:06 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:42.957 05:05:06 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:42.957 05:05:06 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:42.957 05:05:06 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:42.957 05:05:06 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:42.957 05:05:06 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:42.957 05:05:06 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:42.957 05:05:06 -- common/build_config.sh@68 -- 
# CONFIG_AVAHI=n 00:24:42.957 05:05:06 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:42.957 05:05:06 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:24:42.957 05:05:06 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:42.957 05:05:06 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:42.957 05:05:06 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:42.957 05:05:06 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:42.957 05:05:06 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:42.957 05:05:06 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:42.957 05:05:06 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:42.957 05:05:06 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:42.957 05:05:06 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:42.957 05:05:06 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:42.957 05:05:06 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:42.957 05:05:06 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:42.957 05:05:06 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:42.957 05:05:06 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:42.957 05:05:06 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:42.957 05:05:06 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:42.957 05:05:06 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:42.957 05:05:06 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:42.957 05:05:06 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:42.957 05:05:06 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:42.957 05:05:06 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:42.957 05:05:06 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:42.957 05:05:06 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:42.957 05:05:06 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:42.957 05:05:06 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:42.957 #define SPDK_CONFIG_H 00:24:42.957 #define SPDK_CONFIG_APPS 1 00:24:42.957 #define SPDK_CONFIG_ARCH native 00:24:42.957 #define SPDK_CONFIG_ASAN 1 00:24:42.957 #undef SPDK_CONFIG_AVAHI 00:24:42.957 #undef SPDK_CONFIG_CET 00:24:42.957 #define SPDK_CONFIG_COVERAGE 1 00:24:42.957 #define SPDK_CONFIG_CROSS_PREFIX 00:24:42.957 #undef SPDK_CONFIG_CRYPTO 00:24:42.957 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:42.957 #undef SPDK_CONFIG_CUSTOMOCF 00:24:42.957 #undef SPDK_CONFIG_DAOS 00:24:42.957 #define SPDK_CONFIG_DAOS_DIR 00:24:42.957 #define SPDK_CONFIG_DEBUG 1 00:24:42.957 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:42.957 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:42.957 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:42.957 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:42.957 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:42.957 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:42.957 #define SPDK_CONFIG_EXAMPLES 1 00:24:42.957 #undef SPDK_CONFIG_FC 00:24:42.957 #define SPDK_CONFIG_FC_PATH 00:24:42.957 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:42.957 
#define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:42.957 #undef SPDK_CONFIG_FUSE 00:24:42.957 #undef SPDK_CONFIG_FUZZER 00:24:42.957 #define SPDK_CONFIG_FUZZER_LIB 00:24:42.957 #undef SPDK_CONFIG_GOLANG 00:24:42.957 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:42.957 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:42.957 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:42.957 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:42.957 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:42.957 #define SPDK_CONFIG_IDXD 1 00:24:42.957 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:42.957 #undef SPDK_CONFIG_IPSEC_MB 00:24:42.957 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:42.957 #define SPDK_CONFIG_ISAL 1 00:24:42.957 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:42.957 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:42.957 #define SPDK_CONFIG_LIBDIR 00:24:42.957 #undef SPDK_CONFIG_LTO 00:24:42.957 #define SPDK_CONFIG_MAX_LCORES 00:24:42.957 #define SPDK_CONFIG_NVME_CUSE 1 00:24:42.957 #undef SPDK_CONFIG_OCF 00:24:42.957 #define SPDK_CONFIG_OCF_PATH 00:24:42.957 #define SPDK_CONFIG_OPENSSL_PATH 00:24:42.957 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:42.957 #undef SPDK_CONFIG_PGO_USE 00:24:42.957 #define SPDK_CONFIG_PREFIX /usr/local 00:24:42.957 #define SPDK_CONFIG_RAID5F 1 00:24:42.957 #undef SPDK_CONFIG_RBD 00:24:42.957 #define SPDK_CONFIG_RDMA 1 00:24:42.957 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:42.957 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:42.957 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:42.957 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:42.957 #undef SPDK_CONFIG_SHARED 00:24:42.957 #undef SPDK_CONFIG_SMA 00:24:42.957 #define SPDK_CONFIG_TESTS 1 00:24:42.957 #undef SPDK_CONFIG_TSAN 00:24:42.957 #define SPDK_CONFIG_UBLK 1 00:24:42.957 #define SPDK_CONFIG_UBSAN 1 00:24:42.957 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:42.957 #undef SPDK_CONFIG_URING 00:24:42.958 #define SPDK_CONFIG_URING_PATH 00:24:42.958 #undef SPDK_CONFIG_URING_ZNS 00:24:42.958 #undef SPDK_CONFIG_USDT 00:24:42.958 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:42.958 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:42.958 #undef SPDK_CONFIG_VFIO_USER 00:24:42.958 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:42.958 #define SPDK_CONFIG_VHOST 1 00:24:42.958 #define SPDK_CONFIG_VIRTIO 1 00:24:42.958 #undef SPDK_CONFIG_VTUNE 00:24:42.958 #define SPDK_CONFIG_VTUNE_DIR 00:24:42.958 #define SPDK_CONFIG_WERROR 1 00:24:42.958 #define SPDK_CONFIG_WPDK_DIR 00:24:42.958 #undef SPDK_CONFIG_XNVME 00:24:42.958 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:42.958 05:05:06 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:42.958 05:05:06 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:42.958 05:05:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.958 05:05:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.958 05:05:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.958 05:05:06 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:42.958 05:05:06 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:42.958 05:05:06 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:42.958 05:05:06 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:42.958 05:05:06 -- paths/export.sh@6 -- # export PATH 00:24:42.958 05:05:06 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:42.958 05:05:06 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:42.958 05:05:06 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:42.958 05:05:06 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:42.958 05:05:06 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:42.958 05:05:06 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:42.958 05:05:06 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:42.958 05:05:06 -- pm/common@16 -- # TEST_TAG=N/A 00:24:42.958 05:05:06 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:42.958 05:05:06 -- common/autotest_common.sh@52 -- # : 1 00:24:42.958 05:05:06 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:42.958 05:05:06 -- common/autotest_common.sh@56 -- # : 0 00:24:42.958 05:05:06 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:42.958 05:05:06 -- common/autotest_common.sh@58 -- # : 0 00:24:42.958 05:05:06 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:42.958 05:05:06 -- common/autotest_common.sh@60 -- # : 1 00:24:42.958 05:05:06 -- common/autotest_common.sh@61 -- # export 
SPDK_RUN_FUNCTIONAL_TEST 00:24:42.958 05:05:06 -- common/autotest_common.sh@62 -- # : 1 00:24:42.958 05:05:06 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:43.219 05:05:06 -- common/autotest_common.sh@64 -- # : 00:24:43.219 05:05:06 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:43.219 05:05:06 -- common/autotest_common.sh@66 -- # : 0 00:24:43.219 05:05:06 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:43.219 05:05:06 -- common/autotest_common.sh@68 -- # : 0 00:24:43.219 05:05:06 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:43.219 05:05:06 -- common/autotest_common.sh@70 -- # : 0 00:24:43.219 05:05:06 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:43.219 05:05:06 -- common/autotest_common.sh@72 -- # : 0 00:24:43.219 05:05:06 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:43.220 05:05:06 -- common/autotest_common.sh@74 -- # : 1 00:24:43.220 05:05:06 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:24:43.220 05:05:06 -- common/autotest_common.sh@76 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:43.220 05:05:06 -- common/autotest_common.sh@78 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:43.220 05:05:06 -- common/autotest_common.sh@80 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:43.220 05:05:06 -- common/autotest_common.sh@82 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:43.220 05:05:06 -- common/autotest_common.sh@84 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:43.220 05:05:06 -- common/autotest_common.sh@86 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:43.220 05:05:06 -- common/autotest_common.sh@88 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:43.220 05:05:06 -- common/autotest_common.sh@90 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:43.220 05:05:06 -- common/autotest_common.sh@92 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:43.220 05:05:06 -- common/autotest_common.sh@94 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:43.220 05:05:06 -- common/autotest_common.sh@96 -- # : rdma 00:24:43.220 05:05:06 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:43.220 05:05:06 -- common/autotest_common.sh@98 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:43.220 05:05:06 -- common/autotest_common.sh@100 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:43.220 05:05:06 -- common/autotest_common.sh@102 -- # : 1 00:24:43.220 05:05:06 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:43.220 05:05:06 -- common/autotest_common.sh@104 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:43.220 05:05:06 -- common/autotest_common.sh@106 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:43.220 05:05:06 -- common/autotest_common.sh@108 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@109 
-- # export SPDK_TEST_VHOST_INIT 00:24:43.220 05:05:06 -- common/autotest_common.sh@110 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:43.220 05:05:06 -- common/autotest_common.sh@112 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:43.220 05:05:06 -- common/autotest_common.sh@114 -- # : 1 00:24:43.220 05:05:06 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:43.220 05:05:06 -- common/autotest_common.sh@116 -- # : 1 00:24:43.220 05:05:06 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:43.220 05:05:06 -- common/autotest_common.sh@118 -- # : 00:24:43.220 05:05:06 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:43.220 05:05:06 -- common/autotest_common.sh@120 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:43.220 05:05:06 -- common/autotest_common.sh@122 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:24:43.220 05:05:06 -- common/autotest_common.sh@124 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:43.220 05:05:06 -- common/autotest_common.sh@126 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:43.220 05:05:06 -- common/autotest_common.sh@128 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:43.220 05:05:06 -- common/autotest_common.sh@130 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:43.220 05:05:06 -- common/autotest_common.sh@132 -- # : 00:24:43.220 05:05:06 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:43.220 05:05:06 -- common/autotest_common.sh@134 -- # : true 00:24:43.220 05:05:06 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:43.220 05:05:06 -- common/autotest_common.sh@136 -- # : 1 00:24:43.220 05:05:06 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:43.220 05:05:06 -- common/autotest_common.sh@138 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:43.220 05:05:06 -- common/autotest_common.sh@140 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:43.220 05:05:06 -- common/autotest_common.sh@142 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:43.220 05:05:06 -- common/autotest_common.sh@144 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:43.220 05:05:06 -- common/autotest_common.sh@146 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:43.220 05:05:06 -- common/autotest_common.sh@148 -- # : 00:24:43.220 05:05:06 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:43.220 05:05:06 -- common/autotest_common.sh@150 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:43.220 05:05:06 -- common/autotest_common.sh@152 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:43.220 05:05:06 -- common/autotest_common.sh@154 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:43.220 05:05:06 -- common/autotest_common.sh@156 -- # : 0 00:24:43.220 05:05:06 -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:43.220 05:05:06 -- common/autotest_common.sh@158 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:43.220 05:05:06 -- common/autotest_common.sh@160 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:43.220 05:05:06 -- common/autotest_common.sh@163 -- # : 00:24:43.220 05:05:06 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:43.220 05:05:06 -- common/autotest_common.sh@165 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:43.220 05:05:06 -- common/autotest_common.sh@167 -- # : 0 00:24:43.220 05:05:06 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:43.220 05:05:06 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:43.220 05:05:06 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:43.220 05:05:06 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:43.220 05:05:06 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:43.220 05:05:06 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:43.220 05:05:06 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:43.220 05:05:06 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:43.220 05:05:06 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:43.220 05:05:06 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:43.220 05:05:06 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:43.220 05:05:06 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:43.220 05:05:06 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:43.220 05:05:06 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:43.220 05:05:06 -- common/autotest_common.sh@196 -- # cat 00:24:43.220 05:05:06 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:43.220 05:05:06 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:43.221 05:05:06 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:43.221 05:05:06 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:43.221 05:05:06 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:43.221 05:05:06 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:43.221 05:05:06 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:43.221 05:05:06 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:43.221 05:05:06 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:43.221 05:05:06 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:43.221 05:05:06 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:43.221 05:05:06 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:43.221 05:05:06 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:43.221 05:05:06 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:43.221 05:05:06 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:43.221 05:05:06 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:43.221 05:05:06 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:43.221 05:05:06 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:43.221 05:05:06 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:43.221 05:05:06 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:24:43.221 05:05:06 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:24:43.221 05:05:06 -- common/autotest_common.sh@249 -- # _LCOV= 00:24:43.221 05:05:06 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:24:43.221 05:05:06 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:24:43.221 05:05:06 -- 
common/autotest_common.sh@255 -- # lcov_opt= 00:24:43.221 05:05:06 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:24:43.221 05:05:06 -- common/autotest_common.sh@259 -- # export valgrind= 00:24:43.221 05:05:06 -- common/autotest_common.sh@259 -- # valgrind= 00:24:43.221 05:05:06 -- common/autotest_common.sh@265 -- # uname -s 00:24:43.221 05:05:06 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:24:43.221 05:05:06 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:24:43.221 05:05:06 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:24:43.221 05:05:06 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:24:43.221 05:05:06 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@275 -- # MAKE=make 00:24:43.221 05:05:06 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:24:43.221 05:05:06 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:24:43.221 05:05:06 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:24:43.221 05:05:06 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:43.221 05:05:06 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:24:43.221 05:05:06 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:24:43.221 05:05:06 -- common/autotest_common.sh@319 -- # [[ -z 87460 ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@319 -- # kill -0 87460 00:24:43.221 05:05:06 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:24:43.221 05:05:06 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:24:43.221 05:05:06 -- common/autotest_common.sh@332 -- # local mount target_dir 00:24:43.221 05:05:06 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:24:43.221 05:05:06 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:24:43.221 05:05:06 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:24:43.221 05:05:06 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:24:43.221 05:05:06 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.zEGLr0 00:24:43.221 05:05:06 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:43.221 05:05:06 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.zEGLr0/tests/interrupt /tmp/spdk.zEGLr0 00:24:43.221 05:05:06 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@328 -- # df -T 00:24:43.221 05:05:06 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=1249312768 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254027264 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=4714496 00:24:43.221 05:05:06 -- 
common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=10279772160 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19681529856 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=9384980480 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267523072 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6270115840 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda16 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=777306112 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=923156480 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=81207296 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=103000064 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=6395904 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=1254010880 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254023168 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:24:43.221 05:05:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:24:43.221 05:05:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=98691031040 00:24:43.221 
05:05:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:24:43.221 05:05:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=1011748864 00:24:43.221 05:05:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:43.221 05:05:06 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:24:43.221 * Looking for test storage... 00:24:43.221 05:05:06 -- common/autotest_common.sh@369 -- # local target_space new_size 00:24:43.221 05:05:06 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:24:43.221 05:05:06 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:43.221 05:05:06 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:43.221 05:05:06 -- common/autotest_common.sh@373 -- # mount=/ 00:24:43.221 05:05:06 -- common/autotest_common.sh@375 -- # target_space=10279772160 00:24:43.221 05:05:06 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:24:43.221 05:05:06 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:24:43.221 05:05:06 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:24:43.221 05:05:06 -- common/autotest_common.sh@382 -- # new_size=11599572992 00:24:43.221 05:05:06 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:43.221 05:05:06 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:43.222 05:05:06 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:43.222 05:05:06 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:43.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:43.222 05:05:06 -- common/autotest_common.sh@390 -- # return 0 00:24:43.222 05:05:06 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:24:43.222 05:05:06 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:24:43.222 05:05:06 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:43.222 05:05:06 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:43.222 05:05:06 -- common/autotest_common.sh@1682 -- # true 00:24:43.222 05:05:06 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:24:43.222 05:05:06 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:43.222 05:05:06 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:43.222 05:05:06 -- common/autotest_common.sh@27 -- # exec 00:24:43.222 05:05:06 -- common/autotest_common.sh@29 -- # exec 00:24:43.222 05:05:06 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:43.222 05:05:06 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:43.222 05:05:06 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:43.222 05:05:06 -- common/autotest_common.sh@18 -- # set -x 00:24:43.222 05:05:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:43.222 05:05:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:43.222 05:05:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:43.222 05:05:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:43.222 05:05:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:43.222 05:05:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:43.222 05:05:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:43.222 05:05:06 -- scripts/common.sh@335 -- # IFS=.-: 00:24:43.222 05:05:06 -- scripts/common.sh@335 -- # read -ra ver1 00:24:43.222 05:05:06 -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.222 05:05:06 -- scripts/common.sh@336 -- # read -ra ver2 00:24:43.222 05:05:06 -- scripts/common.sh@337 -- # local 'op=<' 00:24:43.222 05:05:06 -- scripts/common.sh@339 -- # ver1_l=2 00:24:43.222 05:05:06 -- scripts/common.sh@340 -- # ver2_l=1 00:24:43.222 05:05:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:43.222 05:05:06 -- scripts/common.sh@343 -- # case "$op" in 00:24:43.222 05:05:06 -- scripts/common.sh@344 -- # : 1 00:24:43.222 05:05:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:43.222 05:05:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.222 05:05:06 -- scripts/common.sh@364 -- # decimal 1 00:24:43.222 05:05:06 -- scripts/common.sh@352 -- # local d=1 00:24:43.222 05:05:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.222 05:05:06 -- scripts/common.sh@354 -- # echo 1 00:24:43.222 05:05:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:43.222 05:05:06 -- scripts/common.sh@365 -- # decimal 2 00:24:43.222 05:05:06 -- scripts/common.sh@352 -- # local d=2 00:24:43.222 05:05:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.222 05:05:06 -- scripts/common.sh@354 -- # echo 2 00:24:43.222 05:05:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:43.222 05:05:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:43.222 05:05:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:43.222 05:05:06 -- scripts/common.sh@367 -- # return 0 00:24:43.222 05:05:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.222 05:05:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:43.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.222 --rc genhtml_branch_coverage=1 00:24:43.222 --rc genhtml_function_coverage=1 00:24:43.222 --rc genhtml_legend=1 00:24:43.222 --rc geninfo_all_blocks=1 00:24:43.222 --rc geninfo_unexecuted_blocks=1 00:24:43.222 00:24:43.222 ' 00:24:43.222 05:05:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:43.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.222 --rc genhtml_branch_coverage=1 00:24:43.222 --rc genhtml_function_coverage=1 00:24:43.222 --rc genhtml_legend=1 00:24:43.222 --rc geninfo_all_blocks=1 00:24:43.222 --rc geninfo_unexecuted_blocks=1 00:24:43.222 00:24:43.222 ' 00:24:43.222 05:05:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:43.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.222 --rc genhtml_branch_coverage=1 00:24:43.222 --rc genhtml_function_coverage=1 00:24:43.222 --rc genhtml_legend=1 00:24:43.222 --rc geninfo_all_blocks=1 00:24:43.222 --rc 
geninfo_unexecuted_blocks=1 00:24:43.222 00:24:43.222 ' 00:24:43.222 05:05:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:43.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.222 --rc genhtml_branch_coverage=1 00:24:43.222 --rc genhtml_function_coverage=1 00:24:43.222 --rc genhtml_legend=1 00:24:43.222 --rc geninfo_all_blocks=1 00:24:43.222 --rc geninfo_unexecuted_blocks=1 00:24:43.222 00:24:43.222 ' 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:43.222 05:05:06 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:43.222 05:05:06 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:43.222 05:05:06 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87525 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:43.222 05:05:06 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87525 /var/tmp/spdk.sock 00:24:43.222 05:05:06 -- common/autotest_common.sh@829 -- # '[' -z 87525 ']' 00:24:43.222 05:05:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.222 05:05:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.222 05:05:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.222 05:05:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.222 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:24:43.222 [2024-11-18 05:05:06.691437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:43.222 [2024-11-18 05:05:06.691584] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87525 ] 00:24:43.482 [2024-11-18 05:05:06.845066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:43.482 [2024-11-18 05:05:06.997229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.482 [2024-11-18 05:05:06.997307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.482 [2024-11-18 05:05:06.997330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.741 [2024-11-18 05:05:07.207054] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:44.308 05:05:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.308 05:05:07 -- common/autotest_common.sh@862 -- # return 0 00:24:44.308 05:05:07 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:44.308 05:05:07 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.567 Malloc0 00:24:44.567 Malloc1 00:24:44.567 Malloc2 00:24:44.567 05:05:07 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:44.567 05:05:07 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:44.567 05:05:07 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:44.567 05:05:07 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:44.567 5000+0 records in 00:24:44.567 5000+0 records out 00:24:44.567 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017582 s, 582 MB/s 00:24:44.567 05:05:07 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:44.826 AIO0 00:24:44.826 05:05:08 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 87525 00:24:44.826 05:05:08 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 87525 without_thd 00:24:44.826 05:05:08 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=87525 00:24:44.826 05:05:08 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:44.826 05:05:08 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:44.826 05:05:08 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:44.826 05:05:08 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:44.826 05:05:08 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:44.826 05:05:08 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:44.826 05:05:08 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:44.826 05:05:08 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:44.826 05:05:08 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:45.085 05:05:08 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:45.085 05:05:08 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
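[Editor's note] The thread-id lookups above are plain RPC-plus-jq plumbing: ask the target for its thread list, then keep the ids whose cpumask matches the reactor being probed. A minimal sketch of the same idea (the helper name and hard-coded socket path are illustrative; the jq filter and the decimal cpumask comparison are as they appear in the trace):

# Sketch: list the ids of spdk_threads pinned to a given cpumask.
# Assumes rpc.py is on PATH and the target listens on /var/tmp/spdk.sock.
get_thread_ids_for_mask() {
    local mask_hex=$1                # e.g. 0x1
    local mask_dec=$(( mask_hex ))   # the trace compares against the decimal form
    rpc.py -s /var/tmp/spdk.sock thread_get_stats |
        jq --arg reactor_cpumask "$mask_dec" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}

An empty result (as for mask 0x4 above) simply means no spdk_thread is pinned to that reactor yet.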
00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:45.085 05:05:08 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:45.344 05:05:08 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:45.344 spdk_thread ids are 1 on reactor0. 00:24:45.344 05:05:08 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:45.344 05:05:08 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:45.344 05:05:08 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87525 0 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87525 0 idle 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:45.344 05:05:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87525 root 20 0 20.1t 149120 29952 S 0.0 1.2 0:00.57 reactor_0' 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@48 -- # echo 87525 root 20 0 20.1t 149120 29952 S 0.0 1.2 0:00.57 reactor_0 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:45.603 05:05:08 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:45.603 05:05:08 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87525 1 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87525 1 idle 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:45.603 05:05:08 -- 
interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:45.603 05:05:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87529 root 20 0 20.1t 149120 29952 S 0.0 1.2 0:00.00 reactor_1' 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@48 -- # echo 87529 root 20 0 20.1t 149120 29952 S 0.0 1.2 0:00.00 reactor_1 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:45.603 05:05:09 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:45.603 05:05:09 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87525 2 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87525 2 idle 00:24:45.603 05:05:09 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:45.604 05:05:09 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87530 root 20 0 20.1t 149120 29952 S 0.0 1.2 0:00.00 reactor_2' 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@48 -- # echo 87530 root 20 0 20.1t 149120 29952 S 0.0 1.2 0:00.00 reactor_2 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:45.869 05:05:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:45.869 05:05:09 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:45.869 05:05:09 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:45.869 05:05:09 -- 
interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:46.128 [2024-11-18 05:05:09.567835] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:46.128 05:05:09 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:46.387 [2024-11-18 05:05:09.827595] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:46.387 [2024-11-18 05:05:09.828340] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:46.387 05:05:09 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:46.646 [2024-11-18 05:05:10.095422] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:46.646 [2024-11-18 05:05:10.096153] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:46.646 05:05:10 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:46.646 05:05:10 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87525 0 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87525 0 busy 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:46.646 05:05:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87525 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:01.08 reactor_0' 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@48 -- # echo 87525 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:01.08 reactor_0 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:46.905 05:05:10 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:46.905 05:05:10 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87525 2 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87525 2 busy 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:46.905 05:05:10 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:46.905 05:05:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87530 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:00.45 reactor_2' 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@48 -- # echo 87530 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:00.45 reactor_2 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:47.164 05:05:10 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:47.164 05:05:10 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:47.424 [2024-11-18 05:05:10.791477] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:47.424 [2024-11-18 05:05:10.792462] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:47.424 05:05:10 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:47.424 05:05:10 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 87525 2 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87525 2 idle 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:47.424 05:05:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87530 root 20 0 20.1t 152448 29952 S 0.0 1.2 0:00.69 reactor_2' 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@48 -- # echo 87530 root 20 0 20.1t 152448 29952 S 0.0 1.2 0:00.69 reactor_2 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:47.682 05:05:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:47.682 05:05:11 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:47.943 [2024-11-18 05:05:11.207404] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:47.943 [2024-11-18 05:05:11.208235] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:47.943 05:05:11 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:47.943 05:05:11 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:47.943 05:05:11 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:47.943 [2024-11-18 05:05:11.447819] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:47.943 05:05:11 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 87525 0 00:24:47.943 05:05:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87525 0 idle 00:24:47.943 05:05:11 -- interrupt/interrupt_common.sh@33 -- # local pid=87525 00:24:47.943 05:05:11 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87525 -w 256 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87525 root 20 0 20.1t 152576 29952 S 0.0 1.2 0:01.96 reactor_0' 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@48 -- # echo 87525 root 20 0 20.1t 152576 29952 S 0.0 1.2 0:01.96 reactor_0 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:48.203 05:05:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:48.203 05:05:11 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:48.203 05:05:11 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:48.203 05:05:11 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:48.203 05:05:11 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 87525 00:24:48.203 05:05:11 -- common/autotest_common.sh@936 
-- # '[' -z 87525 ']' 00:24:48.203 05:05:11 -- common/autotest_common.sh@940 -- # kill -0 87525 00:24:48.203 05:05:11 -- common/autotest_common.sh@941 -- # uname 00:24:48.203 05:05:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.203 05:05:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87525 00:24:48.203 killing process with pid 87525 00:24:48.203 05:05:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:48.203 05:05:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:48.203 05:05:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87525' 00:24:48.203 05:05:11 -- common/autotest_common.sh@955 -- # kill 87525 00:24:48.203 05:05:11 -- common/autotest_common.sh@960 -- # wait 87525 00:24:49.582 05:05:12 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:49.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.582 05:05:12 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87668 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87668 /var/tmp/spdk.sock 00:24:49.582 05:05:12 -- common/autotest_common.sh@829 -- # '[' -z 87668 ']' 00:24:49.582 05:05:12 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:49.582 05:05:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.582 05:05:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.582 05:05:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.582 05:05:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.582 05:05:12 -- common/autotest_common.sh@10 -- # set +x 00:24:49.582 [2024-11-18 05:05:12.841208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:49.582 [2024-11-18 05:05:12.841585] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87668 ] 00:24:49.582 [2024-11-18 05:05:13.008501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:49.841 [2024-11-18 05:05:13.158058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.841 [2024-11-18 05:05:13.158172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.841 [2024-11-18 05:05:13.158221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.100 [2024-11-18 05:05:13.368387] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
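[Editor's note] Both runs follow the same lifecycle: launch interrupt_tgt in the background, poll until the RPC socket is usable, run the checks, then tear the process down with the kill/wait pair seen above. A simplified sketch of that pattern (the real helpers in autotest_common.sh add retry accounting, sudo handling and error reporting on top of this):

# Sketch: start the interrupt target and wait for its RPC socket.
start_and_wait() {
    local sock=/var/tmp/spdk.sock
    build/examples/interrupt_tgt -m 0x07 -r "$sock" -E -g &
    tgt_pid=$!
    local retries=100
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0                  # socket is up
        kill -0 "$tgt_pid" 2>/dev/null || return 1  # target died early
        sleep 0.1
    done
    return 1
}

# Sketch: guarded teardown; kill -0 only probes that the pid still exists.
stop_target() {
    kill -0 "$tgt_pid" 2>/dev/null && kill "$tgt_pid"
    wait "$tgt_pid"
}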
00:24:50.359 05:05:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.359 05:05:13 -- common/autotest_common.sh@862 -- # return 0 00:24:50.359 05:05:13 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:50.359 05:05:13 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.618 Malloc0 00:24:50.618 Malloc1 00:24:50.618 Malloc2 00:24:50.618 05:05:14 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:50.618 05:05:14 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:50.618 05:05:14 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:50.618 05:05:14 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:50.618 5000+0 records in 00:24:50.618 5000+0 records out 00:24:50.618 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0183598 s, 558 MB/s 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:50.877 AIO0 00:24:50.877 05:05:14 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 87668 00:24:50.877 05:05:14 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 87668 00:24:50.877 05:05:14 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=87668 00:24:50.877 05:05:14 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:50.877 05:05:14 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:50.877 05:05:14 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:50.877 05:05:14 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:51.136 05:05:14 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:51.136 05:05:14 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:51.136 05:05:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:51.396 05:05:14 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:51.396 05:05:14 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 
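[Editor's note] The bdev setup is identical in both runs: three malloc bdevs created over RPC, plus an AIO bdev backed by a 10 MB zero-filled file (the uname guard above exists because the AIO step is skipped on FreeBSD). Reduced to standalone commands, with paths shortened, it is roughly:

# Sketch: back an AIO bdev with a zero-filled file (2048-byte blocks x 5000).
dd if=/dev/zero of=test/interrupt/aiofile bs=2048 count=5000
scripts/rpc.py bdev_aio_create test/interrupt/aiofile AIO0 2048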
00:24:51.396 spdk_thread ids are 1 on reactor0. 00:24:51.396 05:05:14 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:51.396 05:05:14 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87668 0 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87668 0 idle 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:51.396 05:05:14 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:51.655 05:05:14 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87668 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.58 reactor_0' 00:24:51.655 05:05:14 -- interrupt/interrupt_common.sh@48 -- # echo 87668 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.58 reactor_0 00:24:51.655 05:05:14 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:51.655 05:05:14 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:51.656 05:05:14 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:51.656 05:05:14 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87668 1 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87668 1 idle 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:51.656 05:05:14 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87672 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_1' 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@48 -- # echo 87672 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_1 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:51.915 
05:05:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:51.915 05:05:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:51.915 05:05:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87668 2 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87668 2 idle 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87673 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_2' 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@48 -- # echo 87673 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_2 00:24:51.915 05:05:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:51.916 05:05:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:51.916 05:05:15 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:51.916 05:05:15 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:52.175 [2024-11-18 05:05:15.660688] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:52.175 [2024-11-18 05:05:15.660941] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:24:52.175 [2024-11-18 05:05:15.661899] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:52.175 05:05:15 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:52.434 [2024-11-18 05:05:15.920565] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
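[Editor's note] All of the idle checks above (and the busy checks that follow) go through one measurement: sample top once in batch mode, grab the row for the reactor thread, and read its %CPU column. Condensed into a single function, with the thresholds the trace applies (busy needs at least 70%, idle at most 30%):

# Sketch: check whether reactor_<idx> of <pid> is in the expected state.
reactor_state_ok() {
    local pid=$1 idx=$2 state=$3   # state: busy or idle
    local row rate
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    rate=$(awk '{print $9}' <<<"$row")  # %CPU column
    rate=${rate%.*}                     # drop the fractional part
    if [[ $state == busy ]]; then
        (( rate >= 70 ))   # a polling reactor should spin near 100%
    else
        (( rate <= 30 ))   # an interrupt-mode reactor should sit near 0%
    fi
}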
00:24:52.434 [2024-11-18 05:05:15.921231] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:52.434 05:05:15 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:52.434 05:05:15 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87668 0 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87668 0 busy 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:52.434 05:05:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87668 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:01.09 reactor_0' 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@48 -- # echo 87668 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:01.09 reactor_0 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:52.693 05:05:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:52.693 05:05:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87668 2 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87668 2 busy 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:52.693 05:05:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87673 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:00.45 reactor_2' 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@48 -- # echo 87673 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:00.45 reactor_2 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:52.953 05:05:16 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:52.953 05:05:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:52.953 05:05:16 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:53.212 [2024-11-18 05:05:16.568814] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:53.212 [2024-11-18 05:05:16.569334] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:53.212 05:05:16 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:53.212 05:05:16 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 87668 2 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87668 2 idle 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:53.212 05:05:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87673 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:00.64 reactor_2' 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@48 -- # echo 87673 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:00.64 reactor_2 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:53.470 05:05:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:53.470 05:05:16 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:53.729 [2024-11-18 05:05:17.052877] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:53.729 [2024-11-18 05:05:17.053883] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
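[Editor's note] What the second half of each test repeats is a full round trip per reactor: switch it back to poll mode with the -d flag (top should then show the reactor near 100%), then re-enable interrupt mode and confirm it drops back toward 0%. Driven by hand, and reusing the reactor_state_ok sketch above, the cycle for reactor 2 would look roughly like:

# Sketch: one poll/interrupt round trip for reactor 2.
RPC="scripts/rpc.py --plugin interrupt_plugin"
$RPC reactor_set_interrupt_mode 2 -d     # -d: disable interrupts (poll mode)
reactor_state_ok "$tgt_pid" 2 busy       # polling reactor burns a full core
$RPC reactor_set_interrupt_mode 2        # no flag: back to interrupt mode
reactor_state_ok "$tgt_pid" 2 idle       # interrupt-driven reactor goes quiet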
00:24:53.729 [2024-11-18 05:05:17.053956] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:53.729 05:05:17 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:53.729 05:05:17 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 87668 0 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87668 0 idle 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@33 -- # local pid=87668 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:53.729 05:05:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87668 -w 256 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87668 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:01.99 reactor_0' 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@48 -- # echo 87668 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:01.99 reactor_0 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:53.988 05:05:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:53.988 05:05:17 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:53.988 05:05:17 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:53.988 05:05:17 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:53.988 05:05:17 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 87668 00:24:53.988 05:05:17 -- common/autotest_common.sh@936 -- # '[' -z 87668 ']' 00:24:53.988 05:05:17 -- common/autotest_common.sh@940 -- # kill -0 87668 00:24:53.988 05:05:17 -- common/autotest_common.sh@941 -- # uname 00:24:53.988 05:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:53.988 05:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87668 00:24:53.988 05:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:53.988 05:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:53.988 killing process with pid 87668 00:24:53.988 05:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87668' 00:24:53.988 05:05:17 -- common/autotest_common.sh@955 -- # kill 87668 00:24:53.988 05:05:17 -- common/autotest_common.sh@960 -- # wait 87668 00:24:54.926 05:05:18 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:54.926 05:05:18 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:54.926 00:24:54.926 real 0m12.186s 00:24:54.926 user 
0m11.897s 00:24:54.926 sys 0m1.750s 00:24:54.926 05:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:54.926 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:24:54.926 ************************************ 00:24:54.926 END TEST reactor_set_interrupt 00:24:54.926 ************************************ 00:24:55.186 05:05:18 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:55.186 05:05:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:55.186 05:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.186 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:24:55.186 ************************************ 00:24:55.186 START TEST reap_unregistered_poller 00:24:55.186 ************************************ 00:24:55.186 05:05:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:55.186 * Looking for test storage... 00:24:55.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.186 05:05:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:55.186 05:05:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:55.186 05:05:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:55.186 05:05:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:55.186 05:05:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:55.186 05:05:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:55.186 05:05:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:55.186 05:05:18 -- scripts/common.sh@335 -- # IFS=.-: 00:24:55.186 05:05:18 -- scripts/common.sh@335 -- # read -ra ver1 00:24:55.186 05:05:18 -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.186 05:05:18 -- scripts/common.sh@336 -- # read -ra ver2 00:24:55.186 05:05:18 -- scripts/common.sh@337 -- # local 'op=<' 00:24:55.186 05:05:18 -- scripts/common.sh@339 -- # ver1_l=2 00:24:55.186 05:05:18 -- scripts/common.sh@340 -- # ver2_l=1 00:24:55.186 05:05:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:55.186 05:05:18 -- scripts/common.sh@343 -- # case "$op" in 00:24:55.186 05:05:18 -- scripts/common.sh@344 -- # : 1 00:24:55.186 05:05:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:55.186 05:05:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.186 05:05:18 -- scripts/common.sh@364 -- # decimal 1 00:24:55.186 05:05:18 -- scripts/common.sh@352 -- # local d=1 00:24:55.186 05:05:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.186 05:05:18 -- scripts/common.sh@354 -- # echo 1 00:24:55.186 05:05:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:55.186 05:05:18 -- scripts/common.sh@365 -- # decimal 2 00:24:55.186 05:05:18 -- scripts/common.sh@352 -- # local d=2 00:24:55.186 05:05:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.186 05:05:18 -- scripts/common.sh@354 -- # echo 2 00:24:55.186 05:05:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:55.186 05:05:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:55.186 05:05:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:55.186 05:05:18 -- scripts/common.sh@367 -- # return 0 00:24:55.186 05:05:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.186 05:05:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.186 --rc genhtml_branch_coverage=1 00:24:55.186 --rc genhtml_function_coverage=1 00:24:55.186 --rc genhtml_legend=1 00:24:55.186 --rc geninfo_all_blocks=1 00:24:55.186 --rc geninfo_unexecuted_blocks=1 00:24:55.186 00:24:55.186 ' 00:24:55.186 05:05:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.186 --rc genhtml_branch_coverage=1 00:24:55.186 --rc genhtml_function_coverage=1 00:24:55.186 --rc genhtml_legend=1 00:24:55.186 --rc geninfo_all_blocks=1 00:24:55.186 --rc geninfo_unexecuted_blocks=1 00:24:55.186 00:24:55.186 ' 00:24:55.186 05:05:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.186 --rc genhtml_branch_coverage=1 00:24:55.186 --rc genhtml_function_coverage=1 00:24:55.186 --rc genhtml_legend=1 00:24:55.186 --rc geninfo_all_blocks=1 00:24:55.186 --rc geninfo_unexecuted_blocks=1 00:24:55.186 00:24:55.186 ' 00:24:55.186 05:05:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.186 --rc genhtml_branch_coverage=1 00:24:55.186 --rc genhtml_function_coverage=1 00:24:55.186 --rc genhtml_legend=1 00:24:55.186 --rc geninfo_all_blocks=1 00:24:55.186 --rc geninfo_unexecuted_blocks=1 00:24:55.186 00:24:55.186 ' 00:24:55.186 05:05:18 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:55.186 05:05:18 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:55.186 05:05:18 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.186 05:05:18 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.186 05:05:18 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
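[Editor's note] The lcov gate that reappears here is an ordinary dotted-version comparison: split both versions on '.', '-' and ':', then walk the components left to right, padding the shorter list with zeros; 1.15 sorts before 2 because the first components already decide it. Stripped of the trace machinery, and assuming purely numeric components, the check reduces to roughly:

# Sketch: succeed when dotted version $1 is strictly older than $2.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}
version_lt 1.15 2 && echo "old lcov: pass the extra --rc coverage options"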
00:24:55.186 05:05:18 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:55.186 05:05:18 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:55.186 05:05:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:55.186 05:05:18 -- common/autotest_common.sh@34 -- # set -e 00:24:55.186 05:05:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:55.186 05:05:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:55.186 05:05:18 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:55.186 05:05:18 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:55.186 05:05:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:55.186 05:05:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:55.186 05:05:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:55.186 05:05:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:55.186 05:05:18 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:55.186 05:05:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:55.186 05:05:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:55.186 05:05:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:55.186 05:05:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:55.186 05:05:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:55.186 05:05:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:55.186 05:05:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:55.186 05:05:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:55.186 05:05:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:55.186 05:05:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:55.186 05:05:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:55.186 05:05:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:55.186 05:05:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:55.186 05:05:18 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:55.186 05:05:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:55.187 05:05:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:55.187 05:05:18 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:55.187 05:05:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:55.187 05:05:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:55.187 05:05:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:55.187 05:05:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:55.187 05:05:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:55.187 05:05:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:55.187 05:05:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:55.187 05:05:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:55.187 05:05:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:55.187 05:05:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:55.187 05:05:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:55.187 05:05:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:55.187 05:05:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:55.187 05:05:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:55.187 
05:05:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:55.187 05:05:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:55.187 05:05:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:55.187 05:05:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:55.187 05:05:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:55.187 05:05:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:55.187 05:05:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:55.187 05:05:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:55.187 05:05:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:55.187 05:05:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:55.187 05:05:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:55.187 05:05:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:55.187 05:05:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:55.187 05:05:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:55.187 05:05:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:55.187 05:05:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:55.187 05:05:18 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:55.187 05:05:18 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:55.187 05:05:18 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:55.187 05:05:18 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:55.187 05:05:18 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:55.187 05:05:18 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:55.187 05:05:18 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:55.187 05:05:18 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:24:55.187 05:05:18 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:55.187 05:05:18 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:55.187 05:05:18 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:55.187 05:05:18 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:55.187 05:05:18 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:55.187 05:05:18 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:55.187 05:05:18 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:55.187 05:05:18 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:24:55.187 05:05:18 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:55.187 05:05:18 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:24:55.187 05:05:18 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:55.187 05:05:18 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:55.187 05:05:18 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:55.187 05:05:18 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:55.187 05:05:18 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:55.187 05:05:18 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:55.187 05:05:18 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:55.187 05:05:18 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:55.187 05:05:18 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:55.187 05:05:18 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:55.187 05:05:18 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:55.187 05:05:18 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:55.449 
05:05:18 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:55.449 05:05:18 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:55.449 05:05:18 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:55.449 05:05:18 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:55.449 05:05:18 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:55.449 05:05:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:55.449 05:05:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:55.449 05:05:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:55.449 05:05:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:55.449 05:05:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:55.449 05:05:18 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:55.449 05:05:18 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:55.449 05:05:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:55.449 #define SPDK_CONFIG_H 00:24:55.449 #define SPDK_CONFIG_APPS 1 00:24:55.449 #define SPDK_CONFIG_ARCH native 00:24:55.449 #define SPDK_CONFIG_ASAN 1 00:24:55.449 #undef SPDK_CONFIG_AVAHI 00:24:55.449 #undef SPDK_CONFIG_CET 00:24:55.449 #define SPDK_CONFIG_COVERAGE 1 00:24:55.449 #define SPDK_CONFIG_CROSS_PREFIX 00:24:55.449 #undef SPDK_CONFIG_CRYPTO 00:24:55.449 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:55.449 #undef SPDK_CONFIG_CUSTOMOCF 00:24:55.449 #undef SPDK_CONFIG_DAOS 00:24:55.449 #define SPDK_CONFIG_DAOS_DIR 00:24:55.449 #define SPDK_CONFIG_DEBUG 1 00:24:55.449 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:55.449 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:55.449 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:55.449 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:55.449 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:55.449 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:55.449 #define SPDK_CONFIG_EXAMPLES 1 00:24:55.449 #undef SPDK_CONFIG_FC 00:24:55.449 #define SPDK_CONFIG_FC_PATH 00:24:55.449 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:55.449 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:55.449 #undef SPDK_CONFIG_FUSE 00:24:55.449 #undef SPDK_CONFIG_FUZZER 00:24:55.449 #define SPDK_CONFIG_FUZZER_LIB 00:24:55.449 #undef SPDK_CONFIG_GOLANG 00:24:55.449 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:55.449 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:55.449 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:55.449 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:55.449 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:55.449 #define SPDK_CONFIG_IDXD 1 00:24:55.449 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:55.449 #undef SPDK_CONFIG_IPSEC_MB 00:24:55.449 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:55.449 #define SPDK_CONFIG_ISAL 1 00:24:55.449 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:55.449 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:55.449 #define SPDK_CONFIG_LIBDIR 00:24:55.449 #undef SPDK_CONFIG_LTO 00:24:55.449 #define SPDK_CONFIG_MAX_LCORES 00:24:55.449 #define SPDK_CONFIG_NVME_CUSE 1 00:24:55.449 #undef SPDK_CONFIG_OCF 00:24:55.449 #define SPDK_CONFIG_OCF_PATH 00:24:55.449 #define SPDK_CONFIG_OPENSSL_PATH 00:24:55.449 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:55.449 #undef SPDK_CONFIG_PGO_USE 00:24:55.449 #define SPDK_CONFIG_PREFIX /usr/local 
00:24:55.449 #define SPDK_CONFIG_RAID5F 1 00:24:55.449 #undef SPDK_CONFIG_RBD 00:24:55.449 #define SPDK_CONFIG_RDMA 1 00:24:55.449 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:55.449 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:55.449 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:55.449 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:55.449 #undef SPDK_CONFIG_SHARED 00:24:55.449 #undef SPDK_CONFIG_SMA 00:24:55.449 #define SPDK_CONFIG_TESTS 1 00:24:55.449 #undef SPDK_CONFIG_TSAN 00:24:55.449 #define SPDK_CONFIG_UBLK 1 00:24:55.449 #define SPDK_CONFIG_UBSAN 1 00:24:55.449 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:55.449 #undef SPDK_CONFIG_URING 00:24:55.449 #define SPDK_CONFIG_URING_PATH 00:24:55.449 #undef SPDK_CONFIG_URING_ZNS 00:24:55.449 #undef SPDK_CONFIG_USDT 00:24:55.449 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:55.449 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:55.449 #undef SPDK_CONFIG_VFIO_USER 00:24:55.449 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:55.449 #define SPDK_CONFIG_VHOST 1 00:24:55.449 #define SPDK_CONFIG_VIRTIO 1 00:24:55.449 #undef SPDK_CONFIG_VTUNE 00:24:55.449 #define SPDK_CONFIG_VTUNE_DIR 00:24:55.449 #define SPDK_CONFIG_WERROR 1 00:24:55.449 #define SPDK_CONFIG_WPDK_DIR 00:24:55.449 #undef SPDK_CONFIG_XNVME 00:24:55.449 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:55.449 05:05:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:55.449 05:05:18 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.449 05:05:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.449 05:05:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.449 05:05:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.449 05:05:18 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.449 05:05:18 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.449 05:05:18 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.449 05:05:18 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.449 05:05:18 -- paths/export.sh@6 -- # export PATH 00:24:55.449 05:05:18 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.449 05:05:18 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:55.449 05:05:18 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:55.449 05:05:18 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:55.449 05:05:18 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:55.449 05:05:18 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:55.449 05:05:18 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:55.449 05:05:18 -- pm/common@16 -- # TEST_TAG=N/A 00:24:55.449 05:05:18 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:55.449 05:05:18 -- common/autotest_common.sh@52 -- # : 1 00:24:55.449 05:05:18 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:55.450 05:05:18 -- common/autotest_common.sh@56 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:55.450 05:05:18 -- common/autotest_common.sh@58 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:55.450 05:05:18 -- common/autotest_common.sh@60 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:55.450 05:05:18 -- common/autotest_common.sh@62 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:55.450 05:05:18 -- common/autotest_common.sh@64 -- # : 00:24:55.450 05:05:18 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:55.450 05:05:18 -- common/autotest_common.sh@66 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:55.450 05:05:18 -- common/autotest_common.sh@68 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:55.450 05:05:18 -- common/autotest_common.sh@70 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:55.450 05:05:18 -- common/autotest_common.sh@72 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:55.450 05:05:18 -- common/autotest_common.sh@74 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 
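The alternating ": <value>" / "export <FLAG>" entries in this stretch of the trace are the shell default-assignment idiom: ":" is a no-op whose argument expansion assigns a default only when the flag has no value yet. A minimal sketch, with the two flags and defaults taken from the trace purely as examples:

    # ':' evaluates its arguments and discards them; ${VAR:=default} assigns
    # the default to VAR as a side effect when VAR is unset or empty.
    : "${SPDK_TEST_UNITTEST:=1}"
    : "${SPDK_RUN_VALGRIND:=0}"
    export SPDK_TEST_UNITTEST SPDK_RUN_VALGRIND
    echo "unittest=$SPDK_TEST_UNITTEST valgrind=$SPDK_RUN_VALGRIND"
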
00:24:55.450 05:05:18 -- common/autotest_common.sh@76 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:55.450 05:05:18 -- common/autotest_common.sh@78 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:55.450 05:05:18 -- common/autotest_common.sh@80 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:55.450 05:05:18 -- common/autotest_common.sh@82 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:55.450 05:05:18 -- common/autotest_common.sh@84 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:55.450 05:05:18 -- common/autotest_common.sh@86 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:55.450 05:05:18 -- common/autotest_common.sh@88 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:55.450 05:05:18 -- common/autotest_common.sh@90 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:55.450 05:05:18 -- common/autotest_common.sh@92 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:55.450 05:05:18 -- common/autotest_common.sh@94 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:55.450 05:05:18 -- common/autotest_common.sh@96 -- # : rdma 00:24:55.450 05:05:18 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:55.450 05:05:18 -- common/autotest_common.sh@98 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:55.450 05:05:18 -- common/autotest_common.sh@100 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:55.450 05:05:18 -- common/autotest_common.sh@102 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:55.450 05:05:18 -- common/autotest_common.sh@104 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:55.450 05:05:18 -- common/autotest_common.sh@106 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:55.450 05:05:18 -- common/autotest_common.sh@108 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:24:55.450 05:05:18 -- common/autotest_common.sh@110 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:55.450 05:05:18 -- common/autotest_common.sh@112 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:55.450 05:05:18 -- common/autotest_common.sh@114 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:55.450 05:05:18 -- common/autotest_common.sh@116 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:55.450 05:05:18 -- common/autotest_common.sh@118 -- # : 00:24:55.450 05:05:18 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:55.450 05:05:18 -- common/autotest_common.sh@120 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:55.450 05:05:18 -- common/autotest_common.sh@122 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@123 -- # export 
SPDK_TEST_CRYPTO 00:24:55.450 05:05:18 -- common/autotest_common.sh@124 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:55.450 05:05:18 -- common/autotest_common.sh@126 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:55.450 05:05:18 -- common/autotest_common.sh@128 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:55.450 05:05:18 -- common/autotest_common.sh@130 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:55.450 05:05:18 -- common/autotest_common.sh@132 -- # : 00:24:55.450 05:05:18 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:55.450 05:05:18 -- common/autotest_common.sh@134 -- # : true 00:24:55.450 05:05:18 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:55.450 05:05:18 -- common/autotest_common.sh@136 -- # : 1 00:24:55.450 05:05:18 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:55.450 05:05:18 -- common/autotest_common.sh@138 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:55.450 05:05:18 -- common/autotest_common.sh@140 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:55.450 05:05:18 -- common/autotest_common.sh@142 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:55.450 05:05:18 -- common/autotest_common.sh@144 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:55.450 05:05:18 -- common/autotest_common.sh@146 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:55.450 05:05:18 -- common/autotest_common.sh@148 -- # : 00:24:55.450 05:05:18 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:55.450 05:05:18 -- common/autotest_common.sh@150 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:55.450 05:05:18 -- common/autotest_common.sh@152 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:55.450 05:05:18 -- common/autotest_common.sh@154 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:55.450 05:05:18 -- common/autotest_common.sh@156 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:55.450 05:05:18 -- common/autotest_common.sh@158 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:55.450 05:05:18 -- common/autotest_common.sh@160 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:55.450 05:05:18 -- common/autotest_common.sh@163 -- # : 00:24:55.450 05:05:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:55.450 05:05:18 -- common/autotest_common.sh@165 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:55.450 05:05:18 -- common/autotest_common.sh@167 -- # : 0 00:24:55.450 05:05:18 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:55.450 05:05:18 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:55.450 05:05:18 -- 
common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:55.450 05:05:18 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:55.450 05:05:18 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:55.450 05:05:18 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:55.450 05:05:18 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:55.450 05:05:18 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:55.450 05:05:18 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:55.450 05:05:18 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:55.450 05:05:18 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:55.450 05:05:18 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:55.450 05:05:18 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:55.450 05:05:18 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:55.450 05:05:18 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:55.451 05:05:18 -- common/autotest_common.sh@196 -- # cat 00:24:55.451 05:05:18 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:55.451 05:05:18 -- common/autotest_common.sh@224 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:55.451 05:05:18 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:55.451 05:05:18 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:55.451 05:05:18 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:55.451 05:05:18 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:55.451 05:05:18 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:55.451 05:05:18 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:55.451 05:05:18 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:55.451 05:05:18 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:55.451 05:05:18 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:55.451 05:05:18 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:55.451 05:05:18 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:55.451 05:05:18 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:55.451 05:05:18 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:55.451 05:05:18 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:55.451 05:05:18 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:55.451 05:05:18 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:55.451 05:05:18 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:55.451 05:05:18 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:24:55.451 05:05:18 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:24:55.451 05:05:18 -- common/autotest_common.sh@249 -- # _LCOV= 00:24:55.451 05:05:18 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:24:55.451 05:05:18 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:24:55.451 05:05:18 -- common/autotest_common.sh@255 -- # lcov_opt= 00:24:55.451 05:05:18 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:24:55.451 05:05:18 -- common/autotest_common.sh@259 -- # export valgrind= 00:24:55.451 05:05:18 -- common/autotest_common.sh@259 -- # valgrind= 00:24:55.451 05:05:18 -- common/autotest_common.sh@265 -- # uname -s 00:24:55.451 05:05:18 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:24:55.451 05:05:18 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:24:55.451 05:05:18 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:24:55.451 05:05:18 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:24:55.451 05:05:18 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@275 -- # MAKE=make 00:24:55.451 05:05:18 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:24:55.451 05:05:18 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:24:55.451 05:05:18 -- 
common/autotest_common.sh@292 -- # HUGEMEM=4096 00:24:55.451 05:05:18 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:55.451 05:05:18 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:24:55.451 05:05:18 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:24:55.451 05:05:18 -- common/autotest_common.sh@319 -- # [[ -z 87829 ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@319 -- # kill -0 87829 00:24:55.451 05:05:18 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:24:55.451 05:05:18 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:24:55.451 05:05:18 -- common/autotest_common.sh@332 -- # local mount target_dir 00:24:55.451 05:05:18 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:24:55.451 05:05:18 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:24:55.451 05:05:18 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:24:55.451 05:05:18 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:24:55.451 05:05:18 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.zqxkBV 00:24:55.451 05:05:18 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:55.451 05:05:18 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.zqxkBV/tests/interrupt /tmp/spdk.zqxkBV 00:24:55.451 05:05:18 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@328 -- # df -T 00:24:55.451 05:05:18 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=1249312768 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254027264 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=4714496 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=10279731200 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19681529856 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=9385021440 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267523072 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6270115840 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:24:55.451 05:05:18 -- 
common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda16 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=777306112 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=923156480 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=81207296 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=103000064 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=6395904 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=1254010880 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254023168 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:24:55.451 05:05:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=98690932736 00:24:55.451 05:05:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:24:55.451 05:05:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=1011847168 00:24:55.451 05:05:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:55.451 05:05:18 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:24:55.451 * Looking for test storage... 
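The selection logic that follows compares each candidate directory's available space against the requested size. A hedged sketch of the df parsing traced above (array names mirror the trace; the 2 GiB request matches the requested_size value shown):

    # Parse 'df -T' into associative arrays keyed by mount point; df reports
    # 1K blocks, so convert to bytes before comparing against requested_size.
    declare -A fss avails
    requested_size=2147483648   # 2 GiB, as in the trace
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    target=/
    (( ${avails[$target]:-0} >= requested_size )) && echo "using $target (${fss[$target]})"
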
00:24:55.451 05:05:18 -- common/autotest_common.sh@369 -- # local target_space new_size 00:24:55.451 05:05:18 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:24:55.451 05:05:18 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:55.451 05:05:18 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.451 05:05:18 -- common/autotest_common.sh@373 -- # mount=/ 00:24:55.451 05:05:18 -- common/autotest_common.sh@375 -- # target_space=10279731200 00:24:55.451 05:05:18 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:24:55.451 05:05:18 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:24:55.451 05:05:18 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:24:55.451 05:05:18 -- common/autotest_common.sh@382 -- # new_size=11599613952 00:24:55.451 05:05:18 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:55.451 05:05:18 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.451 05:05:18 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.451 05:05:18 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:55.451 05:05:18 -- common/autotest_common.sh@390 -- # return 0 00:24:55.452 05:05:18 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:24:55.452 05:05:18 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:24:55.452 05:05:18 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:55.452 05:05:18 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:55.452 05:05:18 -- common/autotest_common.sh@1682 -- # true 00:24:55.452 05:05:18 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:24:55.452 05:05:18 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:55.452 05:05:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:55.452 05:05:18 -- common/autotest_common.sh@27 -- # exec 00:24:55.452 05:05:18 -- common/autotest_common.sh@29 -- # exec 00:24:55.452 05:05:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:55.452 05:05:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:55.452 05:05:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:55.452 05:05:18 -- common/autotest_common.sh@18 -- # set -x 00:24:55.452 05:05:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:55.452 05:05:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:55.452 05:05:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:55.452 05:05:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:55.452 05:05:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:55.452 05:05:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:55.452 05:05:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:55.452 05:05:18 -- scripts/common.sh@335 -- # IFS=.-: 00:24:55.452 05:05:18 -- scripts/common.sh@335 -- # read -ra ver1 00:24:55.452 05:05:18 -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.452 05:05:18 -- scripts/common.sh@336 -- # read -ra ver2 00:24:55.452 05:05:18 -- scripts/common.sh@337 -- # local 'op=<' 00:24:55.452 05:05:18 -- scripts/common.sh@339 -- # ver1_l=2 00:24:55.452 05:05:18 -- scripts/common.sh@340 -- # ver2_l=1 00:24:55.452 05:05:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:55.452 05:05:18 -- scripts/common.sh@343 -- # case "$op" in 00:24:55.452 05:05:18 -- scripts/common.sh@344 -- # : 1 00:24:55.452 05:05:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:55.452 05:05:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:55.452 05:05:18 -- scripts/common.sh@364 -- # decimal 1 00:24:55.452 05:05:18 -- scripts/common.sh@352 -- # local d=1 00:24:55.452 05:05:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.452 05:05:18 -- scripts/common.sh@354 -- # echo 1 00:24:55.452 05:05:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:55.452 05:05:18 -- scripts/common.sh@365 -- # decimal 2 00:24:55.452 05:05:18 -- scripts/common.sh@352 -- # local d=2 00:24:55.452 05:05:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.452 05:05:18 -- scripts/common.sh@354 -- # echo 2 00:24:55.452 05:05:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:55.452 05:05:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:55.452 05:05:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:55.452 05:05:18 -- scripts/common.sh@367 -- # return 0 00:24:55.452 05:05:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.452 05:05:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:55.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.452 --rc genhtml_branch_coverage=1 00:24:55.452 --rc genhtml_function_coverage=1 00:24:55.452 --rc genhtml_legend=1 00:24:55.452 --rc geninfo_all_blocks=1 00:24:55.452 --rc geninfo_unexecuted_blocks=1 00:24:55.452 00:24:55.452 ' 00:24:55.452 05:05:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:55.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.452 --rc genhtml_branch_coverage=1 00:24:55.452 --rc genhtml_function_coverage=1 00:24:55.452 --rc genhtml_legend=1 00:24:55.452 --rc geninfo_all_blocks=1 00:24:55.452 --rc geninfo_unexecuted_blocks=1 00:24:55.452 00:24:55.452 ' 00:24:55.452 05:05:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:55.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.452 --rc genhtml_branch_coverage=1 00:24:55.452 --rc genhtml_function_coverage=1 00:24:55.452 --rc genhtml_legend=1 00:24:55.452 --rc geninfo_all_blocks=1 00:24:55.452 --rc 
geninfo_unexecuted_blocks=1 00:24:55.452 00:24:55.452 ' 00:24:55.452 05:05:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:55.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.452 --rc genhtml_branch_coverage=1 00:24:55.452 --rc genhtml_function_coverage=1 00:24:55.452 --rc genhtml_legend=1 00:24:55.452 --rc geninfo_all_blocks=1 00:24:55.452 --rc geninfo_unexecuted_blocks=1 00:24:55.452 00:24:55.452 ' 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:55.452 05:05:18 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:55.452 05:05:18 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:55.452 05:05:18 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:55.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87894 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.452 05:05:18 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87894 /var/tmp/spdk.sock 00:24:55.452 05:05:18 -- common/autotest_common.sh@829 -- # '[' -z 87894 ']' 00:24:55.452 05:05:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.452 05:05:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.452 05:05:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.452 05:05:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.452 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:24:55.452 [2024-11-18 05:05:18.956750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
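At this point start_intr_tgt has launched the interrupt target and is waiting for its RPC socket before issuing commands. A rough sketch of that start-and-wait pattern, assuming a simple polling loop stands in for waitforlisten (whose real implementation does more bookkeeping):

    rpc_sock=/var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
    intr_tgt_pid=$!
    trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    for _ in {1..100}; do                      # ~10 s retry budget, illustrative
        kill -0 "$intr_tgt_pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        [[ -S $rpc_sock ]] && break            # socket exists: server is listening
        sleep 0.1
    done
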
00:24:55.452 [2024-11-18 05:05:18.956918] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87894 ] 00:24:55.711 [2024-11-18 05:05:19.128022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.970 [2024-11-18 05:05:19.280761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.970 [2024-11-18 05:05:19.280870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.970 [2024-11-18 05:05:19.280892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.970 [2024-11-18 05:05:19.491172] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:56.573 05:05:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.573 05:05:19 -- common/autotest_common.sh@862 -- # return 0 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:56.573 05:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.573 05:05:19 -- common/autotest_common.sh@10 -- # set +x 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:56.573 05:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:56.573 "name": "app_thread", 00:24:56.573 "id": 1, 00:24:56.573 "active_pollers": [], 00:24:56.573 "timed_pollers": [ 00:24:56.573 { 00:24:56.573 "name": "rpc_subsystem_poll", 00:24:56.573 "id": 1, 00:24:56.573 "state": "waiting", 00:24:56.573 "run_count": 0, 00:24:56.573 "busy_count": 0, 00:24:56.573 "period_ticks": 8800000 00:24:56.573 } 00:24:56.573 ], 00:24:56.573 "paused_pollers": [] 00:24:56.573 }' 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:24:56.573 05:05:19 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:56.573 05:05:19 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:56.573 05:05:19 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:56.573 05:05:19 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:56.573 5000+0 records in 00:24:56.573 5000+0 records out 00:24:56.573 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186802 s, 548 MB/s 00:24:56.573 05:05:19 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:56.845 AIO0 00:24:56.845 05:05:20 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:57.105 05:05:20 -- common/autotest_common.sh@561 -- # xtrace_disable 
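The jq filters applied next strip the poller names out of that thread_get_pollers JSON. A minimal sketch of the same query path, reusing the rpc.py and socket paths that appear elsewhere in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    app_thread=$("$rpc" -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')
    active=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    timed=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
    echo "active:[$active] timed:[$timed]"   # the trace expects rpc_subsystem_poll
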
00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:57.105 05:05:20 -- common/autotest_common.sh@10 -- # set +x 00:24:57.105 05:05:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:57.105 "name": "app_thread", 00:24:57.105 "id": 1, 00:24:57.105 "active_pollers": [], 00:24:57.105 "timed_pollers": [ 00:24:57.105 { 00:24:57.105 "name": "rpc_subsystem_poll", 00:24:57.105 "id": 1, 00:24:57.105 "state": "waiting", 00:24:57.105 "run_count": 0, 00:24:57.105 "busy_count": 0, 00:24:57.105 "period_ticks": 8800000 00:24:57.105 } 00:24:57.105 ], 00:24:57.105 "paused_pollers": [] 00:24:57.105 }' 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:57.105 05:05:20 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 87894 00:24:57.105 05:05:20 -- common/autotest_common.sh@936 -- # '[' -z 87894 ']' 00:24:57.105 05:05:20 -- common/autotest_common.sh@940 -- # kill -0 87894 00:24:57.105 05:05:20 -- common/autotest_common.sh@941 -- # uname 00:24:57.105 05:05:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.105 05:05:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87894 00:24:57.105 killing process with pid 87894 00:24:57.105 05:05:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:57.105 05:05:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:57.105 05:05:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87894' 00:24:57.105 05:05:20 -- common/autotest_common.sh@955 -- # kill 87894 00:24:57.105 05:05:20 -- common/autotest_common.sh@960 -- # wait 87894 00:24:58.483 05:05:21 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:58.483 05:05:21 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:58.483 ************************************ 00:24:58.483 END TEST reap_unregistered_poller 00:24:58.483 ************************************ 00:24:58.483 00:24:58.483 real 0m3.152s 00:24:58.483 user 0m2.490s 00:24:58.483 sys 0m0.566s 00:24:58.483 05:05:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:58.483 05:05:21 -- common/autotest_common.sh@10 -- # set +x 00:24:58.483 05:05:21 -- spdk/autotest.sh@191 -- # uname -s 00:24:58.483 05:05:21 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:24:58.483 05:05:21 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:24:58.483 05:05:21 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:24:58.483 05:05:21 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:58.483 05:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:58.483 05:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:58.483 05:05:21 -- common/autotest_common.sh@10 
-- # set +x 00:24:58.483 ************************************ 00:24:58.483 START TEST spdk_dd 00:24:58.483 ************************************ 00:24:58.483 05:05:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:58.483 * Looking for test storage... 00:24:58.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:58.483 05:05:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:58.483 05:05:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:58.483 05:05:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:58.483 05:05:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:58.483 05:05:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:58.483 05:05:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:58.483 05:05:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:58.483 05:05:21 -- scripts/common.sh@335 -- # IFS=.-: 00:24:58.483 05:05:21 -- scripts/common.sh@335 -- # read -ra ver1 00:24:58.484 05:05:21 -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.484 05:05:21 -- scripts/common.sh@336 -- # read -ra ver2 00:24:58.484 05:05:21 -- scripts/common.sh@337 -- # local 'op=<' 00:24:58.484 05:05:21 -- scripts/common.sh@339 -- # ver1_l=2 00:24:58.484 05:05:21 -- scripts/common.sh@340 -- # ver2_l=1 00:24:58.484 05:05:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:58.484 05:05:21 -- scripts/common.sh@343 -- # case "$op" in 00:24:58.484 05:05:21 -- scripts/common.sh@344 -- # : 1 00:24:58.484 05:05:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:58.484 05:05:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.484 05:05:21 -- scripts/common.sh@364 -- # decimal 1 00:24:58.484 05:05:21 -- scripts/common.sh@352 -- # local d=1 00:24:58.484 05:05:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.484 05:05:21 -- scripts/common.sh@354 -- # echo 1 00:24:58.484 05:05:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:58.484 05:05:21 -- scripts/common.sh@365 -- # decimal 2 00:24:58.484 05:05:21 -- scripts/common.sh@352 -- # local d=2 00:24:58.484 05:05:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.484 05:05:21 -- scripts/common.sh@354 -- # echo 2 00:24:58.484 05:05:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:58.484 05:05:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:58.484 05:05:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:58.484 05:05:21 -- scripts/common.sh@367 -- # return 0 00:24:58.484 05:05:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.484 05:05:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:58.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.484 --rc genhtml_branch_coverage=1 00:24:58.484 --rc genhtml_function_coverage=1 00:24:58.484 --rc genhtml_legend=1 00:24:58.484 --rc geninfo_all_blocks=1 00:24:58.484 --rc geninfo_unexecuted_blocks=1 00:24:58.484 00:24:58.484 ' 00:24:58.484 05:05:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:58.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.484 --rc genhtml_branch_coverage=1 00:24:58.484 --rc genhtml_function_coverage=1 00:24:58.484 --rc genhtml_legend=1 00:24:58.484 --rc geninfo_all_blocks=1 00:24:58.484 --rc geninfo_unexecuted_blocks=1 00:24:58.484 00:24:58.484 ' 00:24:58.484 05:05:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:58.484 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:58.484 --rc genhtml_branch_coverage=1 00:24:58.484 --rc genhtml_function_coverage=1 00:24:58.484 --rc genhtml_legend=1 00:24:58.484 --rc geninfo_all_blocks=1 00:24:58.484 --rc geninfo_unexecuted_blocks=1 00:24:58.484 00:24:58.484 ' 00:24:58.484 05:05:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:58.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.484 --rc genhtml_branch_coverage=1 00:24:58.484 --rc genhtml_function_coverage=1 00:24:58.484 --rc genhtml_legend=1 00:24:58.484 --rc geninfo_all_blocks=1 00:24:58.484 --rc geninfo_unexecuted_blocks=1 00:24:58.484 00:24:58.484 ' 00:24:58.484 05:05:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.484 05:05:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.484 05:05:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.484 05:05:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.484 05:05:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:58.484 05:05:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:58.484 05:05:21 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:58.484 05:05:21 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:58.484 05:05:21 -- paths/export.sh@6 -- # export PATH 00:24:58.484 05:05:21 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:58.484 05:05:21 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:58.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:24:58.743 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:59.314 05:05:22 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:24:59.314 05:05:22 -- dd/dd.sh@11 -- # nvme_in_userspace 00:24:59.314 05:05:22 -- scripts/common.sh@311 -- # local bdf bdfs 00:24:59.314 05:05:22 -- scripts/common.sh@312 -- # local nvmes 00:24:59.314 05:05:22 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:24:59.314 05:05:22 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:59.314 05:05:22 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:24:59.314 05:05:22 -- scripts/common.sh@297 -- # local bdf= 00:24:59.314 05:05:22 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:24:59.314 05:05:22 -- scripts/common.sh@232 -- # local class 00:24:59.314 05:05:22 -- scripts/common.sh@233 -- # local subclass 00:24:59.314 05:05:22 -- scripts/common.sh@234 -- # local progif 00:24:59.314 05:05:22 -- scripts/common.sh@235 -- # printf %02x 1 00:24:59.314 05:05:22 -- scripts/common.sh@235 -- # class=01 00:24:59.314 05:05:22 -- scripts/common.sh@236 -- # printf %02x 8 00:24:59.314 05:05:22 -- scripts/common.sh@236 -- # subclass=08 00:24:59.314 05:05:22 -- scripts/common.sh@237 -- # printf %02x 2 00:24:59.314 05:05:22 -- scripts/common.sh@237 -- # progif=02 00:24:59.314 05:05:22 -- scripts/common.sh@239 -- # hash lspci 00:24:59.314 05:05:22 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:24:59.314 05:05:22 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:24:59.314 05:05:22 -- scripts/common.sh@242 -- # grep -i -- -p02 00:24:59.314 05:05:22 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:59.314 05:05:22 -- scripts/common.sh@244 -- # tr -d '"' 00:24:59.314 05:05:22 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:59.314 05:05:22 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:24:59.314 05:05:22 -- scripts/common.sh@15 -- # local i 00:24:59.314 05:05:22 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:24:59.314 05:05:22 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:59.314 05:05:22 -- scripts/common.sh@24 -- # return 0 00:24:59.314 05:05:22 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:24:59.314 05:05:22 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:24:59.314 05:05:22 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:24:59.314 05:05:22 -- scripts/common.sh@322 -- # uname -s 00:24:59.314 05:05:22 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:24:59.314 05:05:22 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:24:59.314 05:05:22 -- scripts/common.sh@327 -- # (( 1 )) 00:24:59.314 05:05:22 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:24:59.314 05:05:22 -- dd/dd.sh@13 -- # check_liburing 
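The nvme_in_userspace trace above discovers NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), hence the "0108" and "-p02" filters on lspci. A minimal standalone re-sketch of that pipeline; the wrapper name find_nvme_bdfs is illustrative, while the lspci/grep/awk/tr chain is lifted from the trace:

    find_nvme_bdfs() {
        local class subclass progif
        class=$(printf '%02x' 1)      # 01: mass storage controller
        subclass=$(printf '%02x' 8)   # 08: non-volatile memory subsystem
        progif=$(printf '%02x' 2)     # 02: NVM Express programming interface
        # lspci -mm -n -D: machine-readable, numeric IDs, BDF with PCI domain in $1
        lspci -mm -n -D | grep -i -- "-p${progif}" |
            awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }

Against the QEMU guest above this prints 0000:00:06.0, the one controller left for the test; 0000:00:03.0 is skipped earlier because its partitions are mounted.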
00:24:59.314 05:05:22 -- dd/common.sh@139 -- # local lib so 00:24:59.314 05:05:22 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:24:59.314 05:05:22 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:59.314 05:05:22 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:24:59.314 05:05:22 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.314 05:05:22 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:24:59.314 05:05:22 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:59.314 05:05:22 -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:24:59.314 05:05:22 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:59.314 05:05:22 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:24:59.314 05:05:22 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:59.314 05:05:22 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:24:59.314 05:05:22 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:59.314 05:05:22 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:24:59.314 05:05:22 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:59.314 05:05:22 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:24:59.314 05:05:22 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:24:59.314 * spdk_dd linked to liburing 00:24:59.314 05:05:22 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:59.314 05:05:22 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:59.314 05:05:22 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:59.314 05:05:22 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:59.314 05:05:22 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:59.314 05:05:22 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:59.314 05:05:22 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:59.314 05:05:22 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:59.314 05:05:22 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:59.314 05:05:22 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:59.314 05:05:22 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:59.314 05:05:22 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:59.314 05:05:22 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:59.314 05:05:22 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:59.314 05:05:22 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:59.314 05:05:22 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:59.314 05:05:22 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:59.314 05:05:22 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:59.314 05:05:22 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:59.314 05:05:22 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:59.314 05:05:22 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:59.314 05:05:22 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:59.314 05:05:22 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:59.314 05:05:22 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:59.314 05:05:22 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:59.314 05:05:22 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 
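check_liburing above never spawns ldd; it runs the spdk_dd binary with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the shared-object dependencies and exit instead of executing the program, then scans each line for a liburing.so.* entry. A minimal sketch of that detection, with illustrative function and argument names:

    links_liburing() {
        local bin=$1 lib _ so _
        while read -r lib _ so _; do
            # each line looks like: liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)
            if [[ $lib == liburing.so.* ]]; then
                return 0
            fi
        done < <(LD_TRACE_LOADED_OBJECTS=1 "$bin")
        return 1
    }

In this run the loop reaches liburing.so.2, so the harness prints "* spdk_dd linked to liburing" and, because the sourced build_config.sh says CONFIG_URING=n, warns that the binary was built with liburing although no liburing support was requested, before forcing liburing_in_use=1.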
00:24:59.314 05:05:22 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:59.314 05:05:22 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:59.314 05:05:22 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:59.314 05:05:22 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:59.314 05:05:22 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:59.314 05:05:22 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:59.314 05:05:22 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:59.314 05:05:22 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:59.314 05:05:22 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:59.314 05:05:22 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:59.314 05:05:22 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:59.314 05:05:22 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:59.314 05:05:22 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:59.314 05:05:22 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:59.314 05:05:22 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:59.314 05:05:22 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:59.314 05:05:22 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:59.314 05:05:22 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:59.314 05:05:22 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:59.314 05:05:22 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:59.314 05:05:22 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:59.314 05:05:22 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:59.314 05:05:22 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:59.314 05:05:22 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:59.314 05:05:22 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:59.314 05:05:22 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:59.314 05:05:22 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:59.314 05:05:22 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:59.314 05:05:22 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:59.314 05:05:22 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:24:59.314 05:05:22 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:59.314 05:05:22 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:59.314 05:05:22 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:59.314 05:05:22 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:59.314 05:05:22 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:59.314 05:05:22 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:59.314 05:05:22 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:59.314 05:05:22 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:24:59.314 05:05:22 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:59.314 05:05:22 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:24:59.314 05:05:22 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:59.314 05:05:22 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:59.314 05:05:22 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:59.314 05:05:22 -- 
common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:59.314 05:05:22 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:59.314 05:05:22 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:59.314 05:05:22 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:59.314 05:05:22 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:59.314 05:05:22 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:59.314 05:05:22 -- dd/common.sh@149 -- # [[ n != y ]] 00:24:59.314 05:05:22 -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n' 00:24:59.314 * spdk_dd built with liburing, but no liburing support requested? 00:24:59.314 05:05:22 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:24:59.314 05:05:22 -- dd/common.sh@156 -- # export liburing_in_use=1 00:24:59.314 05:05:22 -- dd/common.sh@156 -- # liburing_in_use=1 00:24:59.314 05:05:22 -- dd/common.sh@157 -- # return 0 00:24:59.315 05:05:22 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:24:59.315 05:05:22 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:59.315 05:05:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:59.315 05:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:59.315 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:24:59.574 ************************************ 00:24:59.574 START TEST spdk_dd_basic_rw 00:24:59.574 ************************************ 00:24:59.574 05:05:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:59.574 * Looking for test storage... 00:24:59.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:59.574 05:05:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:59.574 05:05:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:59.574 05:05:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:59.574 05:05:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:59.574 05:05:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:59.574 05:05:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:59.574 05:05:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:59.574 05:05:22 -- scripts/common.sh@335 -- # IFS=.-: 00:24:59.574 05:05:22 -- scripts/common.sh@335 -- # read -ra ver1 00:24:59.574 05:05:22 -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.574 05:05:22 -- scripts/common.sh@336 -- # read -ra ver2 00:24:59.574 05:05:22 -- scripts/common.sh@337 -- # local 'op=<' 00:24:59.574 05:05:22 -- scripts/common.sh@339 -- # ver1_l=2 00:24:59.574 05:05:22 -- scripts/common.sh@340 -- # ver2_l=1 00:24:59.574 05:05:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:59.574 05:05:22 -- scripts/common.sh@343 -- # case "$op" in 00:24:59.574 05:05:22 -- scripts/common.sh@344 -- # : 1 00:24:59.574 05:05:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:59.574 05:05:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.574 05:05:22 -- scripts/common.sh@364 -- # decimal 1 00:24:59.574 05:05:23 -- scripts/common.sh@352 -- # local d=1 00:24:59.574 05:05:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.574 05:05:23 -- scripts/common.sh@354 -- # echo 1 00:24:59.574 05:05:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:59.574 05:05:23 -- scripts/common.sh@365 -- # decimal 2 00:24:59.574 05:05:23 -- scripts/common.sh@352 -- # local d=2 00:24:59.574 05:05:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.574 05:05:23 -- scripts/common.sh@354 -- # echo 2 00:24:59.574 05:05:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:59.574 05:05:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:59.574 05:05:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:59.574 05:05:23 -- scripts/common.sh@367 -- # return 0 00:24:59.574 05:05:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.574 05:05:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.574 --rc genhtml_branch_coverage=1 00:24:59.574 --rc genhtml_function_coverage=1 00:24:59.574 --rc genhtml_legend=1 00:24:59.574 --rc geninfo_all_blocks=1 00:24:59.574 --rc geninfo_unexecuted_blocks=1 00:24:59.574 00:24:59.574 ' 00:24:59.574 05:05:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.574 --rc genhtml_branch_coverage=1 00:24:59.574 --rc genhtml_function_coverage=1 00:24:59.574 --rc genhtml_legend=1 00:24:59.574 --rc geninfo_all_blocks=1 00:24:59.574 --rc geninfo_unexecuted_blocks=1 00:24:59.574 00:24:59.574 ' 00:24:59.574 05:05:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.575 --rc genhtml_branch_coverage=1 00:24:59.575 --rc genhtml_function_coverage=1 00:24:59.575 --rc genhtml_legend=1 00:24:59.575 --rc geninfo_all_blocks=1 00:24:59.575 --rc geninfo_unexecuted_blocks=1 00:24:59.575 00:24:59.575 ' 00:24:59.575 05:05:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:59.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.575 --rc genhtml_branch_coverage=1 00:24:59.575 --rc genhtml_function_coverage=1 00:24:59.575 --rc genhtml_legend=1 00:24:59.575 --rc geninfo_all_blocks=1 00:24:59.575 --rc geninfo_unexecuted_blocks=1 00:24:59.575 00:24:59.575 ' 00:24:59.575 05:05:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.575 05:05:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.575 05:05:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.575 05:05:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.575 05:05:23 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.575 05:05:23 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.575 05:05:23 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.575 05:05:23 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.575 05:05:23 -- paths/export.sh@6 -- # export PATH 00:24:59.575 05:05:23 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:59.575 05:05:23 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:24:59.575 05:05:23 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:24:59.575 05:05:23 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:24:59.575 05:05:23 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:24:59.575 05:05:23 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:24:59.575 05:05:23 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:59.575 05:05:23 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:24:59.575 05:05:23 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:59.575 05:05:23 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:59.575 05:05:23 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:24:59.575 05:05:23 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:24:59.575 05:05:23 -- dd/common.sh@126 -- # mapfile -t id 00:24:59.575 05:05:23 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:24:59.837 05:05:23 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported 
NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization 
Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2245 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:24:59.837 05:05:23 -- dd/common.sh@130 -- # lbaf=04 00:24:59.838 05:05:23 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple 
[... spdk_nvme_identify output repeated verbatim here by the xtrace of the second regex match (dd/common.sh@131, 'LBA Format #04: Data Size:'); duplicate dump elided, full text above ...]
Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:24:59.838 05:05:23 -- dd/common.sh@132 -- # lbaf=4096 00:24:59.838 05:05:23 -- dd/common.sh@134 -- # echo 4096 00:24:59.838 05:05:23 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:24:59.838 05:05:23 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:59.838 05:05:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:24:59.838 05:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:59.838 05:05:23 -- dd/basic_rw.sh@96 -- # : 00:24:59.838 05:05:23 -- common/autotest_common.sh@10 -- # set +x 00:24:59.838 05:05:23 -- dd/basic_rw.sh@96 -- # gen_conf 00:24:59.838 05:05:23 -- dd/common.sh@31 -- # xtrace_disable 00:24:59.838 05:05:23 -- common/autotest_common.sh@10 -- # set +x 00:24:59.838 ************************************ 00:24:59.838 START TEST dd_bs_lt_native_bs 00:24:59.838 ************************************ 00:24:59.838 05:05:23 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:59.838 05:05:23 -- common/autotest_common.sh@650 -- # local es=0 00:24:59.838 05:05:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:59.838 05:05:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.838 05:05:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.838 05:05:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.838 05:05:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.838 05:05:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.838 { 00:24:59.838 "subsystems": [ 00:24:59.838 { 00:24:59.838 "subsystem": "bdev", 00:24:59.838 "config": [ 00:24:59.838 { 00:24:59.838 "params": { 00:24:59.838 "trtype": "pcie", 00:24:59.838 "traddr": "0000:00:06.0", 00:24:59.838 "name": "Nvme0" 00:24:59.838 }, 00:24:59.838 "method": "bdev_nvme_attach_controller" 00:24:59.838 }, 00:24:59.838 { 00:24:59.838 "method": "bdev_wait_for_examine" 00:24:59.838 } 00:24:59.838 ] 00:24:59.838 } 00:24:59.838 ] 00:24:59.838 } 00:24:59.838 05:05:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.838 05:05:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.838 05:05:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:59.838 05:05:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:00.097 [2024-11-18 05:05:23.373528] Starting SPDK v24.01.1-pre git 
sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:00.097 [2024-11-18 05:05:23.373686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88180 ] 00:25:00.097 [2024-11-18 05:05:23.548152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.357 [2024-11-18 05:05:23.781629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.616 [2024-11-18 05:05:24.109060] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:25:00.616 [2024-11-18 05:05:24.109160] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:01.184 [2024-11-18 05:05:24.501171] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:01.443 05:05:24 -- common/autotest_common.sh@653 -- # es=234 00:25:01.443 05:05:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:01.443 05:05:24 -- common/autotest_common.sh@662 -- # es=106 00:25:01.443 05:05:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:01.443 05:05:24 -- common/autotest_common.sh@670 -- # es=1 00:25:01.443 05:05:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:01.443 00:25:01.443 real 0m1.546s 00:25:01.443 user 0m1.232s 00:25:01.443 sys 0m0.235s 00:25:01.443 ************************************ 00:25:01.443 END TEST dd_bs_lt_native_bs 00:25:01.443 ************************************ 00:25:01.443 05:05:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:01.443 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.443 05:05:24 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:25:01.443 05:05:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:01.443 05:05:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:01.443 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.443 ************************************ 00:25:01.443 START TEST dd_rw 00:25:01.443 ************************************ 00:25:01.443 05:05:24 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:25:01.443 05:05:24 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:25:01.443 05:05:24 -- dd/basic_rw.sh@12 -- # local count size 00:25:01.443 05:05:24 -- dd/basic_rw.sh@13 -- # local qds bss 00:25:01.443 05:05:24 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:25:01.443 05:05:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:01.443 05:05:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:01.443 05:05:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:01.443 05:05:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:01.443 05:05:24 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:01.443 05:05:24 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:01.443 05:05:24 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:01.443 05:05:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:01.443 05:05:24 -- dd/basic_rw.sh@23 -- # count=15 00:25:01.443 05:05:24 -- dd/basic_rw.sh@24 -- # count=15 00:25:01.443 05:05:24 -- dd/basic_rw.sh@25 -- # size=61440 00:25:01.443 05:05:24 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:01.443 05:05:24 -- dd/common.sh@98 -- # xtrace_disable 00:25:01.443 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:25:02.012 05:05:25 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:25:02.012 05:05:25 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:02.012 05:05:25 -- dd/common.sh@31 -- # xtrace_disable 00:25:02.012 05:05:25 -- common/autotest_common.sh@10 -- # set +x 00:25:02.012 { 00:25:02.012 "subsystems": [ 00:25:02.012 { 00:25:02.012 "subsystem": "bdev", 00:25:02.012 "config": [ 00:25:02.012 { 00:25:02.012 "params": { 00:25:02.012 "trtype": "pcie", 00:25:02.012 "traddr": "0000:00:06.0", 00:25:02.012 "name": "Nvme0" 00:25:02.012 }, 00:25:02.012 "method": "bdev_nvme_attach_controller" 00:25:02.012 }, 00:25:02.012 { 00:25:02.012 "method": "bdev_wait_for_examine" 00:25:02.012 } 00:25:02.012 ] 00:25:02.012 } 00:25:02.012 ] 00:25:02.012 } 00:25:02.012 [2024-11-18 05:05:25.429073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:02.012 [2024-11-18 05:05:25.429257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88223 ] 00:25:02.271 [2024-11-18 05:05:25.597594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.271 [2024-11-18 05:05:25.746089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.530  [2024-11-18T05:05:26.991Z] Copying: 60/60 [kB] (average 19 MBps) 00:25:03.467 00:25:03.467 05:05:26 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:25:03.467 05:05:26 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:03.467 05:05:26 -- dd/common.sh@31 -- # xtrace_disable 00:25:03.467 05:05:26 -- common/autotest_common.sh@10 -- # set +x 00:25:03.467 { 00:25:03.467 "subsystems": [ 00:25:03.467 { 00:25:03.467 "subsystem": "bdev", 00:25:03.467 "config": [ 00:25:03.467 { 00:25:03.467 "params": { 00:25:03.467 "trtype": "pcie", 00:25:03.467 "traddr": "0000:00:06.0", 00:25:03.467 "name": "Nvme0" 00:25:03.467 }, 00:25:03.467 "method": "bdev_nvme_attach_controller" 00:25:03.467 }, 00:25:03.467 { 00:25:03.467 "method": "bdev_wait_for_examine" 00:25:03.467 } 00:25:03.467 ] 00:25:03.467 } 00:25:03.467 ] 00:25:03.467 } 00:25:03.726 [2024-11-18 05:05:27.013112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
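The dd_bs_lt_native_bs pass above is a negative test: the NOT wrapper inverts spdk_dd's exit status, so the test passes precisely because spdk_dd refused --bs=2048 against the 4096-byte native block size. That native size came from get_native_nvme_bs at the top of the file, which pulls two regex captures out of the spdk_nvme_identify dump: the current LBA format number, then that format's data size. A sketch of the extraction, mirroring the traced dd/common.sh logic with minor simplification:

    get_native_nvme_bs() {
        local pci=$1 lbaf
        local -a id
        # capture the full identify output, as mapfile -t id does in the trace
        mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
        # step 1: which LBA format is current? -> "Current LBA Format: LBA Format #04"
        [[ ${id[*]} =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]] || return 1
        lbaf=${BASH_REMATCH[1]}
        # step 2: that format's data size -> "LBA Format #04: Data Size: 4096"
        [[ ${id[*]} =~ LBA\ Format\ \#${lbaf}:\ Data\ Size:\ *([0-9]+) ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }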
00:25:03.726 [2024-11-18 05:05:27.013284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88244 ] 00:25:03.726 [2024-11-18 05:05:27.182782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.985 [2024-11-18 05:05:27.332316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.244  [2024-11-18T05:05:28.704Z] Copying: 60/60 [kB] (average 19 MBps) 00:25:05.180 00:25:05.180 05:05:28 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:05.180 05:05:28 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:05.180 05:05:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:05.180 05:05:28 -- dd/common.sh@11 -- # local nvme_ref= 00:25:05.180 05:05:28 -- dd/common.sh@12 -- # local size=61440 00:25:05.180 05:05:28 -- dd/common.sh@14 -- # local bs=1048576 00:25:05.180 05:05:28 -- dd/common.sh@15 -- # local count=1 00:25:05.180 05:05:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:05.180 05:05:28 -- dd/common.sh@18 -- # gen_conf 00:25:05.180 05:05:28 -- dd/common.sh@31 -- # xtrace_disable 00:25:05.180 05:05:28 -- common/autotest_common.sh@10 -- # set +x 00:25:05.180 { 00:25:05.180 "subsystems": [ 00:25:05.180 { 00:25:05.180 "subsystem": "bdev", 00:25:05.180 "config": [ 00:25:05.180 { 00:25:05.180 "params": { 00:25:05.180 "trtype": "pcie", 00:25:05.180 "traddr": "0000:00:06.0", 00:25:05.180 "name": "Nvme0" 00:25:05.180 }, 00:25:05.180 "method": "bdev_nvme_attach_controller" 00:25:05.180 }, 00:25:05.180 { 00:25:05.180 "method": "bdev_wait_for_examine" 00:25:05.180 } 00:25:05.180 ] 00:25:05.180 } 00:25:05.180 ] 00:25:05.180 } 00:25:05.180 [2024-11-18 05:05:28.442983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
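Every bs/qd combination in dd_rw follows the same four-step shape traced above: write a generated pattern into the bdev, read the same region back into a second dump file, require a byte-identical diff, then scrub with clear_nvme so the next pattern starts from zeroed blocks. Condensed from the qd=1 traces (paths shortened, and the /dev/fd/62 config plumbing elided):

    # write then read back at the same block size and queue depth
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
    diff -q dd.dump0 dd.dump1        # round trip must be byte-identical
    # clear_nvme: overwrite the tested region with zeroes in 1 MiB blocks
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62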
00:25:05.180 [2024-11-18 05:05:28.443147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88268 ] 00:25:05.180 [2024-11-18 05:05:28.610260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.439 [2024-11-18 05:05:28.758495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.698  [2024-11-18T05:05:30.156Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:25:06.632 00:25:06.632 05:05:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:06.632 05:05:29 -- dd/basic_rw.sh@23 -- # count=15 00:25:06.632 05:05:29 -- dd/basic_rw.sh@24 -- # count=15 00:25:06.632 05:05:29 -- dd/basic_rw.sh@25 -- # size=61440 00:25:06.632 05:05:29 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:06.632 05:05:29 -- dd/common.sh@98 -- # xtrace_disable 00:25:06.632 05:05:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.198 05:05:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:07.198 05:05:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:07.198 05:05:30 -- dd/common.sh@31 -- # xtrace_disable 00:25:07.198 05:05:30 -- common/autotest_common.sh@10 -- # set +x 00:25:07.199 { 00:25:07.199 "subsystems": [ 00:25:07.199 { 00:25:07.199 "subsystem": "bdev", 00:25:07.199 "config": [ 00:25:07.199 { 00:25:07.199 "params": { 00:25:07.199 "trtype": "pcie", 00:25:07.199 "traddr": "0000:00:06.0", 00:25:07.199 "name": "Nvme0" 00:25:07.199 }, 00:25:07.199 "method": "bdev_nvme_attach_controller" 00:25:07.199 }, 00:25:07.199 { 00:25:07.199 "method": "bdev_wait_for_examine" 00:25:07.199 } 00:25:07.199 ] 00:25:07.199 } 00:25:07.199 ] 00:25:07.199 } 00:25:07.199 [2024-11-18 05:05:30.525410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
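The sweep driving these passes is set up in basic_rw.sh's header, traced earlier: block sizes are the native size shifted left by 0..2, crossed with queue depths 1 and 64. A sketch of that matrix; the count rule shown (roughly 60 KiB divided by bs) reproduces the traced values of 15 at bs=4096 and 7 at bs=8192 but is an assumption about the script's intent:

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do                  # as traced: bs is the shift amount here
        bss+=($(( native_bs << bs )))     # yields 4096 8192 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=$(( 61440 / bs ))       # assumed rule: 15 at 4096, 7 at 8192
            size=$(( count * bs ))
            echo "pass: bs=$bs qd=$qd count=$count size=$size bytes"
        done
    done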
00:25:07.199 [2024-11-18 05:05:30.525573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88298 ] 00:25:07.199 [2024-11-18 05:05:30.690397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.458 [2024-11-18 05:05:30.839877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.718  [2024-11-18T05:05:32.179Z] Copying: 60/60 [kB] (average 58 MBps) 00:25:08.655 00:25:08.655 05:05:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:08.655 05:05:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:08.655 05:05:31 -- dd/common.sh@31 -- # xtrace_disable 00:25:08.655 05:05:31 -- common/autotest_common.sh@10 -- # set +x 00:25:08.655 { 00:25:08.655 "subsystems": [ 00:25:08.655 { 00:25:08.655 "subsystem": "bdev", 00:25:08.655 "config": [ 00:25:08.655 { 00:25:08.655 "params": { 00:25:08.655 "trtype": "pcie", 00:25:08.655 "traddr": "0000:00:06.0", 00:25:08.655 "name": "Nvme0" 00:25:08.655 }, 00:25:08.655 "method": "bdev_nvme_attach_controller" 00:25:08.655 }, 00:25:08.655 { 00:25:08.655 "method": "bdev_wait_for_examine" 00:25:08.655 } 00:25:08.655 ] 00:25:08.655 } 00:25:08.655 ] 00:25:08.656 } 00:25:08.656 [2024-11-18 05:05:31.947749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:08.656 [2024-11-18 05:05:31.947911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88317 ] 00:25:08.656 [2024-11-18 05:05:32.114211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.915 [2024-11-18 05:05:32.262632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.174  [2024-11-18T05:05:33.635Z] Copying: 60/60 [kB] (average 58 MBps) 00:25:10.111 00:25:10.111 05:05:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:10.111 05:05:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:10.111 05:05:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:10.111 05:05:33 -- dd/common.sh@11 -- # local nvme_ref= 00:25:10.111 05:05:33 -- dd/common.sh@12 -- # local size=61440 00:25:10.111 05:05:33 -- dd/common.sh@14 -- # local bs=1048576 00:25:10.111 05:05:33 -- dd/common.sh@15 -- # local count=1 00:25:10.111 05:05:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:10.111 05:05:33 -- dd/common.sh@18 -- # gen_conf 00:25:10.111 05:05:33 -- dd/common.sh@31 -- # xtrace_disable 00:25:10.111 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:25:10.111 { 00:25:10.111 "subsystems": [ 00:25:10.111 { 00:25:10.111 "subsystem": "bdev", 00:25:10.111 "config": [ 00:25:10.111 { 00:25:10.111 "params": { 00:25:10.111 "trtype": "pcie", 00:25:10.111 "traddr": "0000:00:06.0", 00:25:10.111 "name": "Nvme0" 00:25:10.111 }, 00:25:10.111 "method": "bdev_nvme_attach_controller" 00:25:10.111 }, 00:25:10.111 { 00:25:10.111 "method": "bdev_wait_for_examine" 00:25:10.111 } 00:25:10.111 ] 00:25:10.111 } 00:25:10.111 ] 00:25:10.111 } 00:25:10.111 [2024-11-18 
05:05:33.533831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:10.111 [2024-11-18 05:05:33.533981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88343 ] 00:25:10.370 [2024-11-18 05:05:33.701718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.370 [2024-11-18 05:05:33.855384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.629  [2024-11-18T05:05:35.088Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:11.564 00:25:11.564 05:05:34 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:11.564 05:05:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:11.564 05:05:34 -- dd/basic_rw.sh@23 -- # count=7 00:25:11.564 05:05:34 -- dd/basic_rw.sh@24 -- # count=7 00:25:11.564 05:05:34 -- dd/basic_rw.sh@25 -- # size=57344 00:25:11.564 05:05:34 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:11.564 05:05:34 -- dd/common.sh@98 -- # xtrace_disable 00:25:11.564 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 05:05:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:25:11.823 05:05:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:11.823 05:05:35 -- dd/common.sh@31 -- # xtrace_disable 00:25:11.823 05:05:35 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 { 00:25:11.823 "subsystems": [ 00:25:11.823 { 00:25:11.823 "subsystem": "bdev", 00:25:11.823 "config": [ 00:25:11.823 { 00:25:11.823 "params": { 00:25:11.823 "trtype": "pcie", 00:25:11.823 "traddr": "0000:00:06.0", 00:25:11.823 "name": "Nvme0" 00:25:11.823 }, 00:25:11.823 "method": "bdev_nvme_attach_controller" 00:25:11.823 }, 00:25:11.823 { 00:25:11.823 "method": "bdev_wait_for_examine" 00:25:11.823 } 00:25:11.823 ] 00:25:11.823 } 00:25:11.823 ] 00:25:11.823 } 00:25:12.080 [2024-11-18 05:05:35.381101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
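gen_bytes 57344 above refills dd.dump0 for the bs=8192 passes; its body is hidden behind xtrace_disable in the trace. A stand-in with the same contract (emit exactly N bytes of non-trivial data), assumed rather than taken from dd/common.sh:

    gen_bytes() {
        local size=$1
        # any non-repeating payload works for a read-back diff;
        # urandom is the simplest source of exactly $size bytes
        head -c "$size" /dev/urandom
    }

    gen_bytes 57344 > dd.dump0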
00:25:12.081 [2024-11-18 05:05:35.381275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88373 ] 00:25:12.081 [2024-11-18 05:05:35.544654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.338 [2024-11-18 05:05:35.693685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.596  [2024-11-18T05:05:37.070Z] Copying: 56/56 [kB] (average 27 MBps) 00:25:13.546 00:25:13.546 05:05:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:25:13.546 05:05:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:13.546 05:05:36 -- dd/common.sh@31 -- # xtrace_disable 00:25:13.546 05:05:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.546 { 00:25:13.546 "subsystems": [ 00:25:13.546 { 00:25:13.546 "subsystem": "bdev", 00:25:13.546 "config": [ 00:25:13.546 { 00:25:13.546 "params": { 00:25:13.546 "trtype": "pcie", 00:25:13.546 "traddr": "0000:00:06.0", 00:25:13.546 "name": "Nvme0" 00:25:13.546 }, 00:25:13.546 "method": "bdev_nvme_attach_controller" 00:25:13.546 }, 00:25:13.546 { 00:25:13.546 "method": "bdev_wait_for_examine" 00:25:13.546 } 00:25:13.546 ] 00:25:13.546 } 00:25:13.546 ] 00:25:13.546 } 00:25:13.546 [2024-11-18 05:05:36.952691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:13.546 [2024-11-18 05:05:36.952843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88392 ] 00:25:13.805 [2024-11-18 05:05:37.122723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.805 [2024-11-18 05:05:37.291303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.373  [2024-11-18T05:05:38.465Z] Copying: 56/56 [kB] (average 27 MBps) 00:25:14.941 00:25:14.941 05:05:38 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:14.941 05:05:38 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:14.941 05:05:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:14.941 05:05:38 -- dd/common.sh@11 -- # local nvme_ref= 00:25:14.941 05:05:38 -- dd/common.sh@12 -- # local size=57344 00:25:14.941 05:05:38 -- dd/common.sh@14 -- # local bs=1048576 00:25:14.941 05:05:38 -- dd/common.sh@15 -- # local count=1 00:25:14.941 05:05:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:14.941 05:05:38 -- dd/common.sh@18 -- # gen_conf 00:25:14.941 05:05:38 -- dd/common.sh@31 -- # xtrace_disable 00:25:14.941 05:05:38 -- common/autotest_common.sh@10 -- # set +x 00:25:14.941 { 00:25:14.941 "subsystems": [ 00:25:14.941 { 00:25:14.941 "subsystem": "bdev", 00:25:14.941 "config": [ 00:25:14.941 { 00:25:14.941 "params": { 00:25:14.941 "trtype": "pcie", 00:25:14.941 "traddr": "0000:00:06.0", 00:25:14.941 "name": "Nvme0" 00:25:14.941 }, 00:25:14.941 "method": "bdev_nvme_attach_controller" 00:25:14.941 }, 00:25:14.941 { 00:25:14.941 "method": "bdev_wait_for_examine" 00:25:14.941 } 00:25:14.941 ] 00:25:14.941 } 00:25:14.941 ] 00:25:14.941 } 00:25:14.941 [2024-11-18 
05:05:38.397670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:14.941 [2024-11-18 05:05:38.397834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88418 ] 00:25:15.200 [2024-11-18 05:05:38.562046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.200 [2024-11-18 05:05:38.715929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.768  [2024-11-18T05:05:40.229Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:16.705 00:25:16.705 05:05:39 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:16.705 05:05:39 -- dd/basic_rw.sh@23 -- # count=7 00:25:16.705 05:05:39 -- dd/basic_rw.sh@24 -- # count=7 00:25:16.705 05:05:39 -- dd/basic_rw.sh@25 -- # size=57344 00:25:16.705 05:05:39 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:16.705 05:05:39 -- dd/common.sh@98 -- # xtrace_disable 00:25:16.705 05:05:39 -- common/autotest_common.sh@10 -- # set +x 00:25:16.964 05:05:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:25:16.964 05:05:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:16.964 05:05:40 -- dd/common.sh@31 -- # xtrace_disable 00:25:16.964 05:05:40 -- common/autotest_common.sh@10 -- # set +x 00:25:16.964 { 00:25:16.964 "subsystems": [ 00:25:16.964 { 00:25:16.964 "subsystem": "bdev", 00:25:16.964 "config": [ 00:25:16.964 { 00:25:16.964 "params": { 00:25:16.964 "trtype": "pcie", 00:25:16.964 "traddr": "0000:00:06.0", 00:25:16.964 "name": "Nvme0" 00:25:16.964 }, 00:25:16.964 "method": "bdev_nvme_attach_controller" 00:25:16.964 }, 00:25:16.964 { 00:25:16.964 "method": "bdev_wait_for_examine" 00:25:16.964 } 00:25:16.964 ] 00:25:16.964 } 00:25:16.964 ] 00:25:16.964 } 00:25:16.964 [2024-11-18 05:05:40.413841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
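Between iterations the verified region is reset, visible above as clear_nvme: a single 1 MiB block of zeroes is written over the bdev before the next pass. With the same names as the sketch above:

"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)   # zero-fill before the next pass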
00:25:16.964 [2024-11-18 05:05:40.413994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88448 ] 00:25:17.223 [2024-11-18 05:05:40.577137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.223 [2024-11-18 05:05:40.725261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.791  [2024-11-18T05:05:41.883Z] Copying: 56/56 [kB] (average 54 MBps) 00:25:18.359 00:25:18.359 05:05:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:25:18.359 05:05:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:18.359 05:05:41 -- dd/common.sh@31 -- # xtrace_disable 00:25:18.359 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:25:18.359 { 00:25:18.359 "subsystems": [ 00:25:18.359 { 00:25:18.359 "subsystem": "bdev", 00:25:18.359 "config": [ 00:25:18.359 { 00:25:18.359 "params": { 00:25:18.359 "trtype": "pcie", 00:25:18.359 "traddr": "0000:00:06.0", 00:25:18.359 "name": "Nvme0" 00:25:18.359 }, 00:25:18.359 "method": "bdev_nvme_attach_controller" 00:25:18.359 }, 00:25:18.359 { 00:25:18.359 "method": "bdev_wait_for_examine" 00:25:18.359 } 00:25:18.359 ] 00:25:18.359 } 00:25:18.359 ] 00:25:18.359 } 00:25:18.359 [2024-11-18 05:05:41.822242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.359 [2024-11-18 05:05:41.822393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88467 ] 00:25:18.617 [2024-11-18 05:05:41.991254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.876 [2024-11-18 05:05:42.140230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.134  [2024-11-18T05:05:43.593Z] Copying: 56/56 [kB] (average 54 MBps) 00:25:20.069 00:25:20.069 05:05:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:20.069 05:05:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:20.069 05:05:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:20.069 05:05:43 -- dd/common.sh@11 -- # local nvme_ref= 00:25:20.069 05:05:43 -- dd/common.sh@12 -- # local size=57344 00:25:20.069 05:05:43 -- dd/common.sh@14 -- # local bs=1048576 00:25:20.069 05:05:43 -- dd/common.sh@15 -- # local count=1 00:25:20.069 05:05:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:20.069 05:05:43 -- dd/common.sh@18 -- # gen_conf 00:25:20.069 05:05:43 -- dd/common.sh@31 -- # xtrace_disable 00:25:20.069 05:05:43 -- common/autotest_common.sh@10 -- # set +x 00:25:20.069 { 00:25:20.069 "subsystems": [ 00:25:20.069 { 00:25:20.069 "subsystem": "bdev", 00:25:20.069 "config": [ 00:25:20.069 { 00:25:20.069 "params": { 00:25:20.069 "trtype": "pcie", 00:25:20.069 "traddr": "0000:00:06.0", 00:25:20.069 "name": "Nvme0" 00:25:20.069 }, 00:25:20.069 "method": "bdev_nvme_attach_controller" 00:25:20.069 }, 00:25:20.069 { 00:25:20.069 "method": "bdev_wait_for_examine" 00:25:20.069 } 00:25:20.069 ] 00:25:20.069 } 00:25:20.069 ] 00:25:20.069 } 00:25:20.069 [2024-11-18 
05:05:43.410469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:20.069 [2024-11-18 05:05:43.410626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88493 ] 00:25:20.069 [2024-11-18 05:05:43.579731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.328 [2024-11-18 05:05:43.736244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.600  [2024-11-18T05:05:45.109Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:21.585 00:25:21.585 05:05:44 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:21.585 05:05:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:21.585 05:05:44 -- dd/basic_rw.sh@23 -- # count=3 00:25:21.585 05:05:44 -- dd/basic_rw.sh@24 -- # count=3 00:25:21.585 05:05:44 -- dd/basic_rw.sh@25 -- # size=49152 00:25:21.585 05:05:44 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:21.585 05:05:44 -- dd/common.sh@98 -- # xtrace_disable 00:25:21.585 05:05:44 -- common/autotest_common.sh@10 -- # set +x 00:25:21.845 05:05:45 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:25:21.845 05:05:45 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:21.845 05:05:45 -- dd/common.sh@31 -- # xtrace_disable 00:25:21.845 05:05:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.845 { 00:25:21.845 "subsystems": [ 00:25:21.845 { 00:25:21.845 "subsystem": "bdev", 00:25:21.845 "config": [ 00:25:21.845 { 00:25:21.845 "params": { 00:25:21.845 "trtype": "pcie", 00:25:21.845 "traddr": "0000:00:06.0", 00:25:21.845 "name": "Nvme0" 00:25:21.845 }, 00:25:21.845 "method": "bdev_nvme_attach_controller" 00:25:21.845 }, 00:25:21.845 { 00:25:21.845 "method": "bdev_wait_for_examine" 00:25:21.845 } 00:25:21.845 ] 00:25:21.845 } 00:25:21.845 ] 00:25:21.845 } 00:25:21.845 [2024-11-18 05:05:45.209036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:21.845 [2024-11-18 05:05:45.209224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88517 ] 00:25:22.105 [2024-11-18 05:05:45.376065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.105 [2024-11-18 05:05:45.525421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.365  [2024-11-18T05:05:46.827Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:23.303 00:25:23.303 05:05:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:25:23.303 05:05:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:23.303 05:05:46 -- dd/common.sh@31 -- # xtrace_disable 00:25:23.303 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:25:23.303 { 00:25:23.303 "subsystems": [ 00:25:23.303 { 00:25:23.303 "subsystem": "bdev", 00:25:23.303 "config": [ 00:25:23.303 { 00:25:23.303 "params": { 00:25:23.303 "trtype": "pcie", 00:25:23.303 "traddr": "0000:00:06.0", 00:25:23.303 "name": "Nvme0" 00:25:23.303 }, 00:25:23.303 "method": "bdev_nvme_attach_controller" 00:25:23.303 }, 00:25:23.303 { 00:25:23.303 "method": "bdev_wait_for_examine" 00:25:23.303 } 00:25:23.303 ] 00:25:23.303 } 00:25:23.303 ] 00:25:23.303 } 00:25:23.303 [2024-11-18 05:05:46.791239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:23.303 [2024-11-18 05:05:46.791401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88542 ] 00:25:23.562 [2024-11-18 05:05:46.960943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.822 [2024-11-18 05:05:47.111760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.081  [2024-11-18T05:05:48.543Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:25.019 00:25:25.019 05:05:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:25.019 05:05:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:25.019 05:05:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:25.019 05:05:48 -- dd/common.sh@11 -- # local nvme_ref= 00:25:25.019 05:05:48 -- dd/common.sh@12 -- # local size=49152 00:25:25.019 05:05:48 -- dd/common.sh@14 -- # local bs=1048576 00:25:25.019 05:05:48 -- dd/common.sh@15 -- # local count=1 00:25:25.019 05:05:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:25.019 05:05:48 -- dd/common.sh@18 -- # gen_conf 00:25:25.019 05:05:48 -- dd/common.sh@31 -- # xtrace_disable 00:25:25.019 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:25:25.019 { 00:25:25.019 "subsystems": [ 00:25:25.019 { 00:25:25.019 "subsystem": "bdev", 00:25:25.019 "config": [ 00:25:25.019 { 00:25:25.019 "params": { 00:25:25.019 "trtype": "pcie", 00:25:25.019 "traddr": "0000:00:06.0", 00:25:25.019 "name": "Nvme0" 00:25:25.019 }, 00:25:25.019 "method": "bdev_nvme_attach_controller" 00:25:25.019 }, 00:25:25.019 { 00:25:25.019 "method": "bdev_wait_for_examine" 00:25:25.019 } 00:25:25.019 ] 00:25:25.019 } 00:25:25.019 ] 00:25:25.019 } 00:25:25.019 [2024-11-18 
05:05:48.295552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:25.019 [2024-11-18 05:05:48.295705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88562 ] 00:25:25.019 [2024-11-18 05:05:48.463957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.278 [2024-11-18 05:05:48.617781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.537  [2024-11-18T05:05:50.000Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:26.476 00:25:26.476 05:05:49 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:26.476 05:05:49 -- dd/basic_rw.sh@23 -- # count=3 00:25:26.476 05:05:49 -- dd/basic_rw.sh@24 -- # count=3 00:25:26.476 05:05:49 -- dd/basic_rw.sh@25 -- # size=49152 00:25:26.476 05:05:49 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:26.476 05:05:49 -- dd/common.sh@98 -- # xtrace_disable 00:25:26.476 05:05:49 -- common/autotest_common.sh@10 -- # set +x 00:25:26.735 05:05:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:25:26.735 05:05:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:26.735 05:05:50 -- dd/common.sh@31 -- # xtrace_disable 00:25:26.735 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:25:26.735 { 00:25:26.735 "subsystems": [ 00:25:26.735 { 00:25:26.735 "subsystem": "bdev", 00:25:26.735 "config": [ 00:25:26.735 { 00:25:26.735 "params": { 00:25:26.735 "trtype": "pcie", 00:25:26.735 "traddr": "0000:00:06.0", 00:25:26.735 "name": "Nvme0" 00:25:26.735 }, 00:25:26.735 "method": "bdev_nvme_attach_controller" 00:25:26.735 }, 00:25:26.735 { 00:25:26.735 "method": "bdev_wait_for_examine" 00:25:26.735 } 00:25:26.735 ] 00:25:26.735 } 00:25:26.735 ] 00:25:26.735 } 00:25:26.735 [2024-11-18 05:05:50.246737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:26.735 [2024-11-18 05:05:50.246892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88592 ] 00:25:26.994 [2024-11-18 05:05:50.414069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.253 [2024-11-18 05:05:50.569481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.513  [2024-11-18T05:05:51.605Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:28.081 00:25:28.081 05:05:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:25:28.081 05:05:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:28.081 05:05:51 -- dd/common.sh@31 -- # xtrace_disable 00:25:28.081 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:25:28.340 { 00:25:28.340 "subsystems": [ 00:25:28.340 { 00:25:28.340 "subsystem": "bdev", 00:25:28.340 "config": [ 00:25:28.340 { 00:25:28.340 "params": { 00:25:28.340 "trtype": "pcie", 00:25:28.340 "traddr": "0000:00:06.0", 00:25:28.340 "name": "Nvme0" 00:25:28.340 }, 00:25:28.340 "method": "bdev_nvme_attach_controller" 00:25:28.340 }, 00:25:28.340 { 00:25:28.340 "method": "bdev_wait_for_examine" 00:25:28.340 } 00:25:28.340 ] 00:25:28.340 } 00:25:28.340 ] 00:25:28.340 } 00:25:28.340 [2024-11-18 05:05:51.661845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:28.341 [2024-11-18 05:05:51.662007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88617 ] 00:25:28.341 [2024-11-18 05:05:51.822535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.600 [2024-11-18 05:05:51.971238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.860  [2024-11-18T05:05:53.321Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:29.797 00:25:29.797 05:05:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:29.797 05:05:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:29.797 05:05:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:29.797 05:05:53 -- dd/common.sh@11 -- # local nvme_ref= 00:25:29.797 05:05:53 -- dd/common.sh@12 -- # local size=49152 00:25:29.797 05:05:53 -- dd/common.sh@14 -- # local bs=1048576 00:25:29.797 05:05:53 -- dd/common.sh@15 -- # local count=1 00:25:29.797 05:05:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:29.797 05:05:53 -- dd/common.sh@18 -- # gen_conf 00:25:29.797 05:05:53 -- dd/common.sh@31 -- # xtrace_disable 00:25:29.797 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:25:29.797 { 00:25:29.797 "subsystems": [ 00:25:29.797 { 00:25:29.797 "subsystem": "bdev", 00:25:29.797 "config": [ 00:25:29.797 { 00:25:29.797 "params": { 00:25:29.797 "trtype": "pcie", 00:25:29.797 "traddr": "0000:00:06.0", 00:25:29.797 "name": "Nvme0" 00:25:29.797 }, 00:25:29.797 "method": "bdev_nvme_attach_controller" 00:25:29.797 }, 00:25:29.797 { 00:25:29.797 "method": "bdev_wait_for_examine" 00:25:29.797 } 00:25:29.797 ] 00:25:29.797 } 00:25:29.797 ] 00:25:29.797 } 00:25:29.797 [2024-11-18 
05:05:53.222969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:29.798 [2024-11-18 05:05:53.223090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88637 ] 00:25:30.056 [2024-11-18 05:05:53.373413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.056 [2024-11-18 05:05:53.531759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.315  [2024-11-18T05:05:54.777Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:31.253 00:25:31.253 ************************************ 00:25:31.253 END TEST dd_rw 00:25:31.253 ************************************ 00:25:31.253 00:25:31.253 real 0m29.766s 00:25:31.253 user 0m24.195s 00:25:31.253 sys 0m3.863s 00:25:31.253 05:05:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.253 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:25:31.253 05:05:54 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:25:31.253 05:05:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:31.253 05:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:31.253 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:25:31.253 ************************************ 00:25:31.253 START TEST dd_rw_offset 00:25:31.253 ************************************ 00:25:31.253 05:05:54 -- common/autotest_common.sh@1114 -- # basic_offset 00:25:31.253 05:05:54 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:25:31.253 05:05:54 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:25:31.253 05:05:54 -- dd/common.sh@98 -- # xtrace_disable 00:25:31.253 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:25:31.253 05:05:54 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:25:31.254 05:05:54 -- dd/basic_rw.sh@56 -- # 
data=50ek0673zovkze1kocjfapx4g8pektg48ly96rsvp5bb6e5o7k0gh09uqj57mi4acfdmldeg7pk0am1xktjh24rdd1wutrejpns8t0i49lp0m687pfgl2b0nnuas3yb1382q5srnlfde89q81m4qj28y6wfdf9jyjzbt69f87va5k2zq14yyvygqnvzwqhd5muyg2y74quiey0efchja9watvv0w47iwtr1lrxhf1avs4bxvmhgdbbbd0m8w20k6obwmcq0urfcu2s5besxnojlasa6rxwfa7zp3qtj4f6ztfvvok8w7ryzilb8bmdgvbdd7y720ej7t9dx2v1tcjtlq9o08iv17opi063j7n2a94xjp5napphkuiq7ceyftq8k79ygmymlrtrchaux4akuizd8hbnu7g0f2m7d3sdw56wsntjvg8o65s1o0ltdjnhxsc207fii0hc2y9ce8tfkybo2adrq42rdjzvnjlixb4tzx6ywp9xl76u1u0hksyv9x9pqlbtu3axp9qulljxoxm8cuev7o626w6dqn13vao103h86txrfop2qj4r1qy638qmfyqbhbm6vtxnbnury5c05k8rlqjv7708gpqyecky3ksdd6srgagmzjwnwmmmasfxsmt1d5yzyzzaix8fmxosxy3vjy7fhahepd0edbcsh5w45bcz9xcvu9lp9wltchu9vy28ekwda79lc3cgudvtqa5gy2t7s9a885seqjeaht7344h73g3jocnjtdf7nm5a668s2gq4xndhhhpsbyrszdcjm324podig9qxxlc9f8w5lpobh3pvabvo7l2vxbh1yjg1fphrdc8et2osv2t9d9sz5ztkri8522qoe5l66yfeai5oeokiy2frtbyqe9pyuyd577gzs4prq4joeprpx9v2c2c2606lusrf71gbkmazt4drwy5p5s63hlj1xaypymyxs4crfeaopcjr1yixig6ozyht5t8ualk0hfbcz0eklkmqvvrmji3bomnalpeiwywvpfr1dm91b8ogfynwozfzzyyt9ly2k43pef336yk66ilpdj7l4jt592cidslao4zbdhneidpm05very8h32t3ohtnh4lhof92jg15ckptkgokv2lz0zp4pwplpneazyue5jstp4abquvnad4pd33hjj0drcj291x2bhn78y0t3mkv8k2c7p1wbs8s7r4aioi3ba28b6ho1bgn1pqklo79qogw620dvjqah6v7fiblrcy1ad9l09z2786nirvjbz5liubqpwucdyqpnggmgd5rnq2vgid6rc5rxphk8ywimf5flhaq6prnyzobfd6nd61gclo92zx6cvcnvxj0idfx5jcaw3rdjbv1wml211yjk7sj5uizgvhutzgvzbkxgoorpxr60b7mw5i2clufa3fglja1a58rf5lsk1ejq9itbntscs5p21h9ke4e3gjp10ckwtq41dw22ogpv28f1ur417s96gnc8b3efcvqzp0yjt0n5byxbszdzzqk5iql8ndx2x3l031su34sedqa96x8bsnrzhdzykpwut61yaqmz83gwrrhoj8jhhplhvomv3rbf4fu2wiu4r074dttrhr8y9b50w85ek8caz10nsm9bwolq7czw1krhf5al83j5no4ziml698n7p8th96820cywy3aomwxwqyo8ey3brtn174atd93s1ejoz7ucrsl6raz4kzalv2z1oz4excrjigv27dr2k4zl9knd3zzamm1nrf5pwds8wutwty4cqt2u2dgfz9j73r5uyw7mswx88as8kt5c1cr7aydtzomjmwra8czv08lijb7a939n12selfeswkh65s3747im9umzqgc3l5mvm136ilc5etwafx8gx9hzgsthr1klxkz7ellkvzwitdwkap3b0zyhnf4vex6235152ske6jveyjxxrvynkh196n32m0ofczokkmfk78e5mx52r1tp7mriysv7r3irww7l5w801pcf69w98rr2su5yojmezjregurlxcgh6aj2faewf07w024thfkwuwhhypadqmv8lhei6gnv5vkfvhowad1uk2272m9mw8fz0jldnd0bk4ibode7md9qpex9szj54zadkzxk42xntxqori5g3t8nu4hw2739ihreu2pynb7fxs8vx8m60b10kkh1w7jd4sy6p06zbddh8lf7r8qc0hr9vhzqchq860zy6fz9291cpvnajq5rl9bji5n15a1pxv7gi138yts5ccznvb5dbua9n85hoizrgjbsa5g5m8zp6kj88cw8x3ksfd09qcfz2jud54w10wa0c0orql8vz4lpeibsqgk4xl0dnphj4fwhwz31ay75pz8lght5457qo99jb3vd6x3wwvmdwq41y2o7gjw0k737qnd5f4p1bed59504dgam9b7g0s3u5hat4ipxcflj7vjmov06vo084qr3voc5kd0c63poak61p6n8getbgfxoczgpx7v87p1ynogezwbhonbpsme5mokntfufviktwxpxdq8qv6goicqo8rgs4z72x4ipsafy19vovsfvz9fjimue5dji7mmied8kodqwkve74qvcmej4mruvtfo5uomb10etmfqpcpq8fdhl3xyl8hsy5zl9dwuljh9weldj16x5jhubbd43epv5ymn9int6n5vly2iwlj06yma15umv2925o7142ijd3vso5c9anh3s45ztu20z9y13c19meo5bem3hbhk3ennvsuf5y7jguq0nai9kv615vii4mno8c5h1c410auunl4ry063gqoehlil6cmu8i6bvrnc3frglajmkk4xtwqkg7z8dwjagywz36iam4y7apnaulzwfmlo5imdotsh9992655d8lpr8hcjqxdxwk7nyjo84yxj6btsw2pm9heoowvha1joyenku9kx4zi7jwiv0gjdus362kvbie4tkq25jdqdteg2i7vkhvrd5i83fz696l5sjma8umd7mmoo0xi1eel7z0the4tynept6azpphi5njpv235353kvi7fs8c3ox6h5fsf67zwyk2md9tmbh6nkf1dttjm7cyembpzkuekb2gcsfycgac7xflp75l0mnwcqytt9ugun4hr6rsu7e5lbqq81te4uidag7dbkfq3kwipstvf8bo7blx4yy5ccvzs8co3ivetmn8ph5opzuddr858h4wad87hioooqznpu7sz99nejwozd6hw7ixt1lcejx04gqr6eb8kv85bujfnuvq3833fshwugmf87osngr94rps61ylmvis167r1z8rary88c3880pze9rrnm0i8vdpxognnhlyjrm9dkbjktc560zv3rj8ejstxzmidyko6gjpuc6lpqg4hlo6m1z6rdcdtajxx5sxidpswyxjty0xzlg1dr4rwk992fduxf8099crsbvx7ixsadr0edav6nwai08cei5xgre9cjhxx5u8vsjgo58ddctr9dqs1pz8bu5s1nv4nrp72m
rjwnp0nybgxhjzgxnjl76h83ehds0edllwsjiq2b6i5hejjbhafb9b9bt4niqkdm58qcn9lhxju479cij3ikecg50hsdt8keynx1e64xwpgf94257u2d5lh202sso4rj7elrhu9uu8z9ajex3k0qxzytd0a7nos1jnll0hbgfuow9i6fsjx1vn19b0ku6kk3kqv7dvcadtwg0vc8qqzsjeprb95bhry5fs1scneg6eq736ws6ykmu1crk6t0rwzwu6me037uwqdivvh6k46akpake3y820rmtfqd9li4wcf82b7s61hs5hbe0tyyjphxoj57ptw728h12pqwfm8tbcv6muk4ij5i13x1r9a732jqb672yo3org9htgjgtwd2q4cyt5tjri0hrw7u37x0mtkkets4kqcahz69y3xp865zuvnvy36u264x3zm9ig4qcgtiq6396tm1u3trq5v82easlia9ww917bajw0gxw6sclcs2rqsrs1y118bji3vrmzgtbsiej7fjqtp695nm7zwopzjtw8hp4f 00:25:31.254 05:05:54 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:25:31.254 05:05:54 -- dd/basic_rw.sh@59 -- # gen_conf 00:25:31.254 05:05:54 -- dd/common.sh@31 -- # xtrace_disable 00:25:31.254 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:25:31.254 { 00:25:31.254 "subsystems": [ 00:25:31.254 { 00:25:31.254 "subsystem": "bdev", 00:25:31.254 "config": [ 00:25:31.254 { 00:25:31.254 "params": { 00:25:31.254 "trtype": "pcie", 00:25:31.254 "traddr": "0000:00:06.0", 00:25:31.254 "name": "Nvme0" 00:25:31.254 }, 00:25:31.254 "method": "bdev_nvme_attach_controller" 00:25:31.254 }, 00:25:31.254 { 00:25:31.254 "method": "bdev_wait_for_examine" 00:25:31.254 } 00:25:31.254 ] 00:25:31.254 } 00:25:31.254 ] 00:25:31.254 } 00:25:31.513 [2024-11-18 05:05:54.811857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:31.513 [2024-11-18 05:05:54.812009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88683 ] 00:25:31.513 [2024-11-18 05:05:54.979904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.771 [2024-11-18 05:05:55.129311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.030  [2024-11-18T05:05:56.492Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:25:32.968 00:25:32.968 05:05:56 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:25:32.968 05:05:56 -- dd/basic_rw.sh@65 -- # gen_conf 00:25:32.968 05:05:56 -- dd/common.sh@31 -- # xtrace_disable 00:25:32.968 05:05:56 -- common/autotest_common.sh@10 -- # set +x 00:25:32.968 { 00:25:32.968 "subsystems": [ 00:25:32.968 { 00:25:32.968 "subsystem": "bdev", 00:25:32.968 "config": [ 00:25:32.968 { 00:25:32.968 "params": { 00:25:32.968 "trtype": "pcie", 00:25:32.968 "traddr": "0000:00:06.0", 00:25:32.968 "name": "Nvme0" 00:25:32.968 }, 00:25:32.968 "method": "bdev_nvme_attach_controller" 00:25:32.968 }, 00:25:32.968 { 00:25:32.968 "method": "bdev_wait_for_examine" 00:25:32.968 } 00:25:32.968 ] 00:25:32.968 } 00:25:32.968 ] 00:25:32.968 } 00:25:32.968 [2024-11-18 05:05:56.383913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
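The dd_rw_offset run above exercises seek/skip handling rather than bulk throughput: the 4096-character payload assigned to data= above is written one block into the bdev (--seek=1), read back from the same offset (--skip=1 --count=1), and compared in the shell, which is the [[ ... ]] record that follows. The same pattern, sketched with the names from the first sketch (not the test's exact code):

data=$(< "$D0")                       # the payload generated above lives in dd.dump0
"$DD" --if="$D0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)             # write at block offset 1
"$DD" --ib=Nvme0n1 --of="$D1" --skip=1 --count=1 --json <(gen_conf)   # read back from offset 1
read -rn4096 data_check < "$D1"       # keep only the payload-sized prefix
[[ "$data" == "$data_check" ]]        # round trip must match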
00:25:32.968 [2024-11-18 05:05:56.384075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88703 ] 00:25:33.226 [2024-11-18 05:05:56.554692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.226 [2024-11-18 05:05:56.706681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.485  [2024-11-18T05:05:57.948Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:25:34.424 00:25:34.424 05:05:57 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:34.424 ************************************ 00:25:34.424 END TEST dd_rw_offset 00:25:34.424 ************************************ 00:25:34.425 05:05:57 -- dd/basic_rw.sh@72 -- # [[ 50ek0673zovkze1kocjfapx4g8pektg48ly96rsvp5bb6e5o7k0gh09uqj57mi4acfdmldeg7pk0am1xktjh24rdd1wutrejpns8t0i49lp0m687pfgl2b0nnuas3yb1382q5srnlfde89q81m4qj28y6wfdf9jyjzbt69f87va5k2zq14yyvygqnvzwqhd5muyg2y74quiey0efchja9watvv0w47iwtr1lrxhf1avs4bxvmhgdbbbd0m8w20k6obwmcq0urfcu2s5besxnojlasa6rxwfa7zp3qtj4f6ztfvvok8w7ryzilb8bmdgvbdd7y720ej7t9dx2v1tcjtlq9o08iv17opi063j7n2a94xjp5napphkuiq7ceyftq8k79ygmymlrtrchaux4akuizd8hbnu7g0f2m7d3sdw56wsntjvg8o65s1o0ltdjnhxsc207fii0hc2y9ce8tfkybo2adrq42rdjzvnjlixb4tzx6ywp9xl76u1u0hksyv9x9pqlbtu3axp9qulljxoxm8cuev7o626w6dqn13vao103h86txrfop2qj4r1qy638qmfyqbhbm6vtxnbnury5c05k8rlqjv7708gpqyecky3ksdd6srgagmzjwnwmmmasfxsmt1d5yzyzzaix8fmxosxy3vjy7fhahepd0edbcsh5w45bcz9xcvu9lp9wltchu9vy28ekwda79lc3cgudvtqa5gy2t7s9a885seqjeaht7344h73g3jocnjtdf7nm5a668s2gq4xndhhhpsbyrszdcjm324podig9qxxlc9f8w5lpobh3pvabvo7l2vxbh1yjg1fphrdc8et2osv2t9d9sz5ztkri8522qoe5l66yfeai5oeokiy2frtbyqe9pyuyd577gzs4prq4joeprpx9v2c2c2606lusrf71gbkmazt4drwy5p5s63hlj1xaypymyxs4crfeaopcjr1yixig6ozyht5t8ualk0hfbcz0eklkmqvvrmji3bomnalpeiwywvpfr1dm91b8ogfynwozfzzyyt9ly2k43pef336yk66ilpdj7l4jt592cidslao4zbdhneidpm05very8h32t3ohtnh4lhof92jg15ckptkgokv2lz0zp4pwplpneazyue5jstp4abquvnad4pd33hjj0drcj291x2bhn78y0t3mkv8k2c7p1wbs8s7r4aioi3ba28b6ho1bgn1pqklo79qogw620dvjqah6v7fiblrcy1ad9l09z2786nirvjbz5liubqpwucdyqpnggmgd5rnq2vgid6rc5rxphk8ywimf5flhaq6prnyzobfd6nd61gclo92zx6cvcnvxj0idfx5jcaw3rdjbv1wml211yjk7sj5uizgvhutzgvzbkxgoorpxr60b7mw5i2clufa3fglja1a58rf5lsk1ejq9itbntscs5p21h9ke4e3gjp10ckwtq41dw22ogpv28f1ur417s96gnc8b3efcvqzp0yjt0n5byxbszdzzqk5iql8ndx2x3l031su34sedqa96x8bsnrzhdzykpwut61yaqmz83gwrrhoj8jhhplhvomv3rbf4fu2wiu4r074dttrhr8y9b50w85ek8caz10nsm9bwolq7czw1krhf5al83j5no4ziml698n7p8th96820cywy3aomwxwqyo8ey3brtn174atd93s1ejoz7ucrsl6raz4kzalv2z1oz4excrjigv27dr2k4zl9knd3zzamm1nrf5pwds8wutwty4cqt2u2dgfz9j73r5uyw7mswx88as8kt5c1cr7aydtzomjmwra8czv08lijb7a939n12selfeswkh65s3747im9umzqgc3l5mvm136ilc5etwafx8gx9hzgsthr1klxkz7ellkvzwitdwkap3b0zyhnf4vex6235152ske6jveyjxxrvynkh196n32m0ofczokkmfk78e5mx52r1tp7mriysv7r3irww7l5w801pcf69w98rr2su5yojmezjregurlxcgh6aj2faewf07w024thfkwuwhhypadqmv8lhei6gnv5vkfvhowad1uk2272m9mw8fz0jldnd0bk4ibode7md9qpex9szj54zadkzxk42xntxqori5g3t8nu4hw2739ihreu2pynb7fxs8vx8m60b10kkh1w7jd4sy6p06zbddh8lf7r8qc0hr9vhzqchq860zy6fz9291cpvnajq5rl9bji5n15a1pxv7gi138yts5ccznvb5dbua9n85hoizrgjbsa5g5m8zp6kj88cw8x3ksfd09qcfz2jud54w10wa0c0orql8vz4lpeibsqgk4xl0dnphj4fwhwz31ay75pz8lght5457qo99jb3vd6x3wwvmdwq41y2o7gjw0k737qnd5f4p1bed59504dgam9b7g0s3u5hat4ipxcflj7vjmov06vo084qr3voc5kd0c63poak61p6n8getbgfxoczgpx7v87p1ynogezwbhonbpsme5mokntfufviktwxpxdq8qv6goicqo8rgs4z72x4ipsafy19vovsfvz9fjimue5dji7mmied8kodqwkve74qvcmej4mruvtfo5uomb10etmfqpcpq8fdhl3xyl8hsy5zl9dwuljh9weldj16x5jhubbd43epv5
[... middle of the [[ ... ]] comparison elided: the remainder of the plain payload and most of its backslash-escaped duplicate, both verbatim repeats of the data= value generated above ...]
r\g\l\a\j\m\k\k\4\x\t\w\q\k\g\7\z\8\d\w\j\a\g\y\w\z\3\6\i\a\m\4\y\7\a\p\n\a\u\l\z\w\f\m\l\o\5\i\m\d\o\t\s\h\9\9\9\2\6\5\5\d\8\l\p\r\8\h\c\j\q\x\d\x\w\k\7\n\y\j\o\8\4\y\x\j\6\b\t\s\w\2\p\m\9\h\e\o\o\w\v\h\a\1\j\o\y\e\n\k\u\9\k\x\4\z\i\7\j\w\i\v\0\g\j\d\u\s\3\6\2\k\v\b\i\e\4\t\k\q\2\5\j\d\q\d\t\e\g\2\i\7\v\k\h\v\r\d\5\i\8\3\f\z\6\9\6\l\5\s\j\m\a\8\u\m\d\7\m\m\o\o\0\x\i\1\e\e\l\7\z\0\t\h\e\4\t\y\n\e\p\t\6\a\z\p\p\h\i\5\n\j\p\v\2\3\5\3\5\3\k\v\i\7\f\s\8\c\3\o\x\6\h\5\f\s\f\6\7\z\w\y\k\2\m\d\9\t\m\b\h\6\n\k\f\1\d\t\t\j\m\7\c\y\e\m\b\p\z\k\u\e\k\b\2\g\c\s\f\y\c\g\a\c\7\x\f\l\p\7\5\l\0\m\n\w\c\q\y\t\t\9\u\g\u\n\4\h\r\6\r\s\u\7\e\5\l\b\q\q\8\1\t\e\4\u\i\d\a\g\7\d\b\k\f\q\3\k\w\i\p\s\t\v\f\8\b\o\7\b\l\x\4\y\y\5\c\c\v\z\s\8\c\o\3\i\v\e\t\m\n\8\p\h\5\o\p\z\u\d\d\r\8\5\8\h\4\w\a\d\8\7\h\i\o\o\o\q\z\n\p\u\7\s\z\9\9\n\e\j\w\o\z\d\6\h\w\7\i\x\t\1\l\c\e\j\x\0\4\g\q\r\6\e\b\8\k\v\8\5\b\u\j\f\n\u\v\q\3\8\3\3\f\s\h\w\u\g\m\f\8\7\o\s\n\g\r\9\4\r\p\s\6\1\y\l\m\v\i\s\1\6\7\r\1\z\8\r\a\r\y\8\8\c\3\8\8\0\p\z\e\9\r\r\n\m\0\i\8\v\d\p\x\o\g\n\n\h\l\y\j\r\m\9\d\k\b\j\k\t\c\5\6\0\z\v\3\r\j\8\e\j\s\t\x\z\m\i\d\y\k\o\6\g\j\p\u\c\6\l\p\q\g\4\h\l\o\6\m\1\z\6\r\d\c\d\t\a\j\x\x\5\s\x\i\d\p\s\w\y\x\j\t\y\0\x\z\l\g\1\d\r\4\r\w\k\9\9\2\f\d\u\x\f\8\0\9\9\c\r\s\b\v\x\7\i\x\s\a\d\r\0\e\d\a\v\6\n\w\a\i\0\8\c\e\i\5\x\g\r\e\9\c\j\h\x\x\5\u\8\v\s\j\g\o\5\8\d\d\c\t\r\9\d\q\s\1\p\z\8\b\u\5\s\1\n\v\4\n\r\p\7\2\m\r\j\w\n\p\0\n\y\b\g\x\h\j\z\g\x\n\j\l\7\6\h\8\3\e\h\d\s\0\e\d\l\l\w\s\j\i\q\2\b\6\i\5\h\e\j\j\b\h\a\f\b\9\b\9\b\t\4\n\i\q\k\d\m\5\8\q\c\n\9\l\h\x\j\u\4\7\9\c\i\j\3\i\k\e\c\g\5\0\h\s\d\t\8\k\e\y\n\x\1\e\6\4\x\w\p\g\f\9\4\2\5\7\u\2\d\5\l\h\2\0\2\s\s\o\4\r\j\7\e\l\r\h\u\9\u\u\8\z\9\a\j\e\x\3\k\0\q\x\z\y\t\d\0\a\7\n\o\s\1\j\n\l\l\0\h\b\g\f\u\o\w\9\i\6\f\s\j\x\1\v\n\1\9\b\0\k\u\6\k\k\3\k\q\v\7\d\v\c\a\d\t\w\g\0\v\c\8\q\q\z\s\j\e\p\r\b\9\5\b\h\r\y\5\f\s\1\s\c\n\e\g\6\e\q\7\3\6\w\s\6\y\k\m\u\1\c\r\k\6\t\0\r\w\z\w\u\6\m\e\0\3\7\u\w\q\d\i\v\v\h\6\k\4\6\a\k\p\a\k\e\3\y\8\2\0\r\m\t\f\q\d\9\l\i\4\w\c\f\8\2\b\7\s\6\1\h\s\5\h\b\e\0\t\y\y\j\p\h\x\o\j\5\7\p\t\w\7\2\8\h\1\2\p\q\w\f\m\8\t\b\c\v\6\m\u\k\4\i\j\5\i\1\3\x\1\r\9\a\7\3\2\j\q\b\6\7\2\y\o\3\o\r\g\9\h\t\g\j\g\t\w\d\2\q\4\c\y\t\5\t\j\r\i\0\h\r\w\7\u\3\7\x\0\m\t\k\k\e\t\s\4\k\q\c\a\h\z\6\9\y\3\x\p\8\6\5\z\u\v\n\v\y\3\6\u\2\6\4\x\3\z\m\9\i\g\4\q\c\g\t\i\q\6\3\9\6\t\m\1\u\3\t\r\q\5\v\8\2\e\a\s\l\i\a\9\w\w\9\1\7\b\a\j\w\0\g\x\w\6\s\c\l\c\s\2\r\q\s\r\s\1\y\1\1\8\b\j\i\3\v\r\m\z\g\t\b\s\i\e\j\7\f\j\q\t\p\6\9\5\n\m\7\z\w\o\p\z\j\t\w\8\h\p\4\f ]] 00:25:34.425 00:25:34.425 real 0m3.110s 00:25:34.425 user 0m2.529s 00:25:34.425 sys 0m0.400s 00:25:34.425 05:05:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:34.425 05:05:57 -- common/autotest_common.sh@10 -- # set +x 00:25:34.425 05:05:57 -- dd/basic_rw.sh@1 -- # cleanup 00:25:34.425 05:05:57 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:34.425 05:05:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:34.425 05:05:57 -- dd/common.sh@11 -- # local nvme_ref= 00:25:34.425 05:05:57 -- dd/common.sh@12 -- # local size=0xffff 00:25:34.425 05:05:57 -- dd/common.sh@14 -- # local bs=1048576 00:25:34.425 05:05:57 -- dd/common.sh@15 -- # local count=1 00:25:34.425 05:05:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:34.425 05:05:57 -- dd/common.sh@18 -- # gen_conf 00:25:34.425 05:05:57 -- dd/common.sh@31 -- # xtrace_disable 00:25:34.425 05:05:57 -- common/autotest_common.sh@10 -- # set +x 00:25:34.425 { 00:25:34.425 "subsystems": [ 00:25:34.425 
{ 00:25:34.425 "subsystem": "bdev", 00:25:34.425 "config": [ 00:25:34.425 { 00:25:34.425 "params": { 00:25:34.425 "trtype": "pcie", 00:25:34.425 "traddr": "0000:00:06.0", 00:25:34.425 "name": "Nvme0" 00:25:34.425 }, 00:25:34.425 "method": "bdev_nvme_attach_controller" 00:25:34.425 }, 00:25:34.425 { 00:25:34.425 "method": "bdev_wait_for_examine" 00:25:34.425 } 00:25:34.425 ] 00:25:34.425 } 00:25:34.425 ] 00:25:34.425 } 00:25:34.425 [2024-11-18 05:05:57.919611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:34.425 [2024-11-18 05:05:57.919767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88743 ] 00:25:34.684 [2024-11-18 05:05:58.087627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.943 [2024-11-18 05:05:58.239825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.202  [2024-11-18T05:05:59.663Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:36.139 00:25:36.139 05:05:59 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:36.139 00:25:36.139 real 0m36.596s 00:25:36.139 user 0m29.480s 00:25:36.139 sys 0m4.968s 00:25:36.139 05:05:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:36.139 ************************************ 00:25:36.139 END TEST spdk_dd_basic_rw 00:25:36.139 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.139 ************************************ 00:25:36.139 05:05:59 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:36.139 05:05:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:36.139 05:05:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.139 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.139 ************************************ 00:25:36.139 START TEST spdk_dd_posix 00:25:36.139 ************************************ 00:25:36.139 05:05:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:36.139 * Looking for test storage... 
00:25:36.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:36.139 05:05:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:36.139 05:05:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:36.139 05:05:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:36.139 05:05:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:36.139 05:05:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:36.139 05:05:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:36.139 05:05:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:36.139 05:05:59 -- scripts/common.sh@335 -- # IFS=.-: 00:25:36.139 05:05:59 -- scripts/common.sh@335 -- # read -ra ver1 00:25:36.139 05:05:59 -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.139 05:05:59 -- scripts/common.sh@336 -- # read -ra ver2 00:25:36.139 05:05:59 -- scripts/common.sh@337 -- # local 'op=<' 00:25:36.139 05:05:59 -- scripts/common.sh@339 -- # ver1_l=2 00:25:36.139 05:05:59 -- scripts/common.sh@340 -- # ver2_l=1 00:25:36.139 05:05:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:36.139 05:05:59 -- scripts/common.sh@343 -- # case "$op" in 00:25:36.139 05:05:59 -- scripts/common.sh@344 -- # : 1 00:25:36.139 05:05:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:36.139 05:05:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:36.139 05:05:59 -- scripts/common.sh@364 -- # decimal 1 00:25:36.139 05:05:59 -- scripts/common.sh@352 -- # local d=1 00:25:36.139 05:05:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.139 05:05:59 -- scripts/common.sh@354 -- # echo 1 00:25:36.139 05:05:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:36.139 05:05:59 -- scripts/common.sh@365 -- # decimal 2 00:25:36.139 05:05:59 -- scripts/common.sh@352 -- # local d=2 00:25:36.139 05:05:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.139 05:05:59 -- scripts/common.sh@354 -- # echo 2 00:25:36.139 05:05:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:36.139 05:05:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:36.139 05:05:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:36.139 05:05:59 -- scripts/common.sh@367 -- # return 0 00:25:36.139 05:05:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.139 05:05:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:36.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.139 --rc genhtml_branch_coverage=1 00:25:36.139 --rc genhtml_function_coverage=1 00:25:36.140 --rc genhtml_legend=1 00:25:36.140 --rc geninfo_all_blocks=1 00:25:36.140 --rc geninfo_unexecuted_blocks=1 00:25:36.140 00:25:36.140 ' 00:25:36.140 05:05:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:36.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.140 --rc genhtml_branch_coverage=1 00:25:36.140 --rc genhtml_function_coverage=1 00:25:36.140 --rc genhtml_legend=1 00:25:36.140 --rc geninfo_all_blocks=1 00:25:36.140 --rc geninfo_unexecuted_blocks=1 00:25:36.140 00:25:36.140 ' 00:25:36.140 05:05:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:36.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.140 --rc genhtml_branch_coverage=1 00:25:36.140 --rc genhtml_function_coverage=1 00:25:36.140 --rc genhtml_legend=1 00:25:36.140 --rc geninfo_all_blocks=1 00:25:36.140 --rc geninfo_unexecuted_blocks=1 00:25:36.140 00:25:36.140 ' 00:25:36.140 05:05:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:36.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.140 --rc genhtml_branch_coverage=1 00:25:36.140 --rc genhtml_function_coverage=1 00:25:36.140 --rc genhtml_legend=1 00:25:36.140 --rc geninfo_all_blocks=1 00:25:36.140 --rc geninfo_unexecuted_blocks=1 00:25:36.140 00:25:36.140 ' 00:25:36.140 05:05:59 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.399 05:05:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.399 05:05:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.399 05:05:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.399 05:05:59 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:36.400 05:05:59 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:36.400 05:05:59 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:36.400 05:05:59 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:36.400 05:05:59 -- paths/export.sh@6 -- # export PATH 00:25:36.400 05:05:59 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:36.400 05:05:59 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:36.400 05:05:59 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:36.400 05:05:59 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:36.400 05:05:59 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:36.400 05:05:59 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:36.400 05:05:59 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:36.400 05:05:59 -- dd/posix.sh@130 -- # tests 00:25:36.400 05:05:59 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:25:36.400 * First test run, liburing in use 00:25:36.400 05:05:59 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:25:36.400 05:05:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:36.400 05:05:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.400 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.400 ************************************ 00:25:36.400 START TEST dd_flag_append 00:25:36.400 ************************************ 00:25:36.400 05:05:59 -- common/autotest_common.sh@1114 -- # append 00:25:36.400 05:05:59 -- dd/posix.sh@16 -- # local dump0 00:25:36.400 05:05:59 -- dd/posix.sh@17 -- # local dump1 00:25:36.400 05:05:59 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:36.400 05:05:59 -- dd/common.sh@98 -- # xtrace_disable 00:25:36.400 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.400 05:05:59 -- dd/posix.sh@19 -- # dump0=c33zzmg1d0som19qhljzz3jotk81171q 00:25:36.400 05:05:59 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:36.400 05:05:59 -- dd/common.sh@98 -- # xtrace_disable 00:25:36.400 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.400 05:05:59 -- dd/posix.sh@20 -- # dump1=b1nonoe6nz7al4cfb14x00nrfyy15398 00:25:36.400 05:05:59 -- dd/posix.sh@22 -- # printf %s c33zzmg1d0som19qhljzz3jotk81171q 00:25:36.400 05:05:59 -- dd/posix.sh@23 -- # printf %s 
b1nonoe6nz7al4cfb14x00nrfyy15398 00:25:36.400 05:05:59 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:36.400 [2024-11-18 05:05:59.723325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:36.400 [2024-11-18 05:05:59.723477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88824 ] 00:25:36.400 [2024-11-18 05:05:59.878453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.659 [2024-11-18 05:06:00.033161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.917  [2024-11-18T05:06:01.378Z] Copying: 32/32 [B] (average 31 kBps) 00:25:37.854 00:25:37.854 05:06:01 -- dd/posix.sh@27 -- # [[ b1nonoe6nz7al4cfb14x00nrfyy15398c33zzmg1d0som19qhljzz3jotk81171q == \b\1\n\o\n\o\e\6\n\z\7\a\l\4\c\f\b\1\4\x\0\0\n\r\f\y\y\1\5\3\9\8\c\3\3\z\z\m\g\1\d\0\s\o\m\1\9\q\h\l\j\z\z\3\j\o\t\k\8\1\1\7\1\q ]] 00:25:37.854 00:25:37.854 real 0m1.535s 00:25:37.854 user 0m1.252s 00:25:37.854 sys 0m0.168s 00:25:37.854 ************************************ 00:25:37.854 END TEST dd_flag_append 00:25:37.854 ************************************ 00:25:37.854 05:06:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:37.854 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:25:37.854 05:06:01 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:25:37.854 05:06:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:37.854 05:06:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:37.854 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:25:37.854 ************************************ 00:25:37.854 START TEST dd_flag_directory 00:25:37.854 ************************************ 00:25:37.854 05:06:01 -- common/autotest_common.sh@1114 -- # directory 00:25:37.854 05:06:01 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:37.854 05:06:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:37.854 05:06:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:37.854 05:06:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.854 05:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.854 05:06:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.854 05:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.854 05:06:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.854 05:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.854 05:06:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.854 05:06:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:37.854 05:06:01 -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:37.854 [2024-11-18 05:06:01.326152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:37.854 [2024-11-18 05:06:01.326326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88858 ] 00:25:38.114 [2024-11-18 05:06:01.495698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.373 [2024-11-18 05:06:01.657252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.373 [2024-11-18 05:06:01.877310] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:38.373 [2024-11-18 05:06:01.877388] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:38.373 [2024-11-18 05:06:01.877424] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:39.311 [2024-11-18 05:06:02.478341] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:39.570 05:06:02 -- common/autotest_common.sh@653 -- # es=236 00:25:39.570 05:06:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:39.570 05:06:02 -- common/autotest_common.sh@662 -- # es=108 00:25:39.570 05:06:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:39.570 05:06:02 -- common/autotest_common.sh@670 -- # es=1 00:25:39.570 05:06:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:39.570 05:06:02 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:39.570 05:06:02 -- common/autotest_common.sh@650 -- # local es=0 00:25:39.570 05:06:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:39.570 05:06:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.570 05:06:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.570 05:06:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.570 05:06:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.570 05:06:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.570 05:06:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.570 05:06:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.570 05:06:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:39.570 05:06:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:39.570 [2024-11-18 05:06:02.904937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
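Both dd_flag_directory cases above are negative tests: spdk_dd is pointed at a regular file with --iflag=directory (and, below, --oflag=directory) and must fail with "Not a directory"; the NOT wrapper then inverts the non-zero exit status, so the test passes only when the copy fails. A minimal stand-in for that wrapper (the real NOT helper in autotest_common.sh is more elaborate):

NOT() { ! "$@"; }    # succeeds only if the wrapped command fails
NOT "$DD" --if="$D0" --iflag=directory --of="$D0"    # expect "Not a directory"
NOT "$DD" --if="$D0" --of="$D0" --oflag=directory    # expect "Not a directory"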
00:25:39.570 [2024-11-18 05:06:02.905095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88880 ] 00:25:39.570 [2024-11-18 05:06:03.075938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.829 [2024-11-18 05:06:03.239176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.088 [2024-11-18 05:06:03.476407] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:40.088 [2024-11-18 05:06:03.476478] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:40.088 [2024-11-18 05:06:03.476496] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:40.657 [2024-11-18 05:06:04.039350] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:40.917 05:06:04 -- common/autotest_common.sh@653 -- # es=236 00:25:40.917 05:06:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:40.917 05:06:04 -- common/autotest_common.sh@662 -- # es=108 00:25:40.917 05:06:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:40.917 05:06:04 -- common/autotest_common.sh@670 -- # es=1 00:25:40.917 05:06:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:40.917 ************************************ 00:25:40.917 END TEST dd_flag_directory 00:25:40.917 ************************************ 00:25:40.917 00:25:40.917 real 0m3.117s 00:25:40.917 user 0m2.540s 00:25:40.917 sys 0m0.374s 00:25:40.917 05:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.917 05:06:04 -- common/autotest_common.sh@10 -- # set +x 00:25:40.917 05:06:04 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:40.917 05:06:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:40.917 05:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.917 05:06:04 -- common/autotest_common.sh@10 -- # set +x 00:25:40.917 ************************************ 00:25:40.917 START TEST dd_flag_nofollow 00:25:40.917 ************************************ 00:25:40.917 05:06:04 -- common/autotest_common.sh@1114 -- # nofollow 00:25:40.917 05:06:04 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:40.917 05:06:04 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:40.917 05:06:04 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:40.917 05:06:04 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:40.917 05:06:04 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:40.917 05:06:04 -- common/autotest_common.sh@650 -- # local es=0 00:25:40.917 05:06:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:40.917 05:06:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.917 05:06:04 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.917 05:06:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.917 05:06:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.917 05:06:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.917 05:06:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.917 05:06:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.917 05:06:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:40.917 05:06:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:41.176 [2024-11-18 05:06:04.483001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:41.176 [2024-11-18 05:06:04.483110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88915 ] 00:25:41.176 [2024-11-18 05:06:04.647005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.435 [2024-11-18 05:06:04.802083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.694 [2024-11-18 05:06:05.014592] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:41.694 [2024-11-18 05:06:05.014668] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:41.694 [2024-11-18 05:06:05.014689] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:42.263 [2024-11-18 05:06:05.554541] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:42.522 05:06:05 -- common/autotest_common.sh@653 -- # es=216 00:25:42.522 05:06:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.522 05:06:05 -- common/autotest_common.sh@662 -- # es=88 00:25:42.522 05:06:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:42.522 05:06:05 -- common/autotest_common.sh@670 -- # es=1 00:25:42.522 05:06:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.522 05:06:05 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:42.522 05:06:05 -- common/autotest_common.sh@650 -- # local es=0 00:25:42.522 05:06:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:42.522 05:06:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.522 05:06:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.522 05:06:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.522 05:06:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.522 05:06:05 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.522 05:06:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.522 05:06:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.522 05:06:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:42.522 05:06:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:42.522 [2024-11-18 05:06:05.974346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:42.522 [2024-11-18 05:06:05.974503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88941 ] 00:25:42.782 [2024-11-18 05:06:06.142759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.782 [2024-11-18 05:06:06.294724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.041 [2024-11-18 05:06:06.506450] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:43.041 [2024-11-18 05:06:06.506526] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:43.041 [2024-11-18 05:06:06.506547] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:43.611 [2024-11-18 05:06:07.059940] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:44.179 05:06:07 -- common/autotest_common.sh@653 -- # es=216 00:25:44.179 05:06:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.179 05:06:07 -- common/autotest_common.sh@662 -- # es=88 00:25:44.179 05:06:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:44.179 05:06:07 -- common/autotest_common.sh@670 -- # es=1 00:25:44.179 05:06:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:44.179 05:06:07 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:44.179 05:06:07 -- dd/common.sh@98 -- # xtrace_disable 00:25:44.179 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:25:44.180 05:06:07 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:44.180 [2024-11-18 05:06:07.468569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:44.180 [2024-11-18 05:06:07.469218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88957 ] 00:25:44.180 [2024-11-18 05:06:07.632314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.439 [2024-11-18 05:06:07.786610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.698  [2024-11-18T05:06:09.161Z] Copying: 512/512 [B] (average 500 kBps) 00:25:45.637 00:25:45.637 05:06:08 -- dd/posix.sh@49 -- # [[ 98z2fxcxguauxc3tfm1div3rte821yr2hd7h9dt1j2wzcyq177ls1ovbo68z1ouxukq9v5p63mtqa8bp7wbxqme4elexn2loksd0a1iwtxz4p3zp0u58cmld9kth52b4ibuvxrtsu1myhwphjl4b6fes57wu81l1rnt1ch55jemv2ljzcv6n7pov53rltgx7qf458stp8ned8d0e6pl5aj6i9pyk08phmrqwa02potn9e64psjtw4awbiwsm21h79bzfd9mbvlngotzo6laiv2dpmi24u3xywh1wvkbi4z7vyl5n5lz9y2pb6y7axh00vsk52apx8mo9em7n2oqm4uqb568g8xdhbj8mdvj48nvjoifdi96ll2rhzhciv500208cs49hqgv1o6s81ns18ve7d2lq5hbg5plr84opb4fjy5e1uq8rr0n3ak32frbxy70mhyfx6cp7hl9ebmihtlmpuu7otamji4ln4ttqpvaltyo63trssxd8ah6roa5n == \9\8\z\2\f\x\c\x\g\u\a\u\x\c\3\t\f\m\1\d\i\v\3\r\t\e\8\2\1\y\r\2\h\d\7\h\9\d\t\1\j\2\w\z\c\y\q\1\7\7\l\s\1\o\v\b\o\6\8\z\1\o\u\x\u\k\q\9\v\5\p\6\3\m\t\q\a\8\b\p\7\w\b\x\q\m\e\4\e\l\e\x\n\2\l\o\k\s\d\0\a\1\i\w\t\x\z\4\p\3\z\p\0\u\5\8\c\m\l\d\9\k\t\h\5\2\b\4\i\b\u\v\x\r\t\s\u\1\m\y\h\w\p\h\j\l\4\b\6\f\e\s\5\7\w\u\8\1\l\1\r\n\t\1\c\h\5\5\j\e\m\v\2\l\j\z\c\v\6\n\7\p\o\v\5\3\r\l\t\g\x\7\q\f\4\5\8\s\t\p\8\n\e\d\8\d\0\e\6\p\l\5\a\j\6\i\9\p\y\k\0\8\p\h\m\r\q\w\a\0\2\p\o\t\n\9\e\6\4\p\s\j\t\w\4\a\w\b\i\w\s\m\2\1\h\7\9\b\z\f\d\9\m\b\v\l\n\g\o\t\z\o\6\l\a\i\v\2\d\p\m\i\2\4\u\3\x\y\w\h\1\w\v\k\b\i\4\z\7\v\y\l\5\n\5\l\z\9\y\2\p\b\6\y\7\a\x\h\0\0\v\s\k\5\2\a\p\x\8\m\o\9\e\m\7\n\2\o\q\m\4\u\q\b\5\6\8\g\8\x\d\h\b\j\8\m\d\v\j\4\8\n\v\j\o\i\f\d\i\9\6\l\l\2\r\h\z\h\c\i\v\5\0\0\2\0\8\c\s\4\9\h\q\g\v\1\o\6\s\8\1\n\s\1\8\v\e\7\d\2\l\q\5\h\b\g\5\p\l\r\8\4\o\p\b\4\f\j\y\5\e\1\u\q\8\r\r\0\n\3\a\k\3\2\f\r\b\x\y\7\0\m\h\y\f\x\6\c\p\7\h\l\9\e\b\m\i\h\t\l\m\p\u\u\7\o\t\a\m\j\i\4\l\n\4\t\t\q\p\v\a\l\t\y\o\6\3\t\r\s\s\x\d\8\a\h\6\r\o\a\5\n ]] 00:25:45.637 00:25:45.637 real 0m4.484s 00:25:45.637 user 0m3.631s 00:25:45.637 sys 0m0.534s 00:25:45.637 ************************************ 00:25:45.637 END TEST dd_flag_nofollow 00:25:45.637 ************************************ 00:25:45.637 05:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:45.637 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.637 05:06:08 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:45.637 05:06:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:45.637 05:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.637 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.637 ************************************ 00:25:45.637 START TEST dd_flag_noatime 00:25:45.637 ************************************ 00:25:45.637 05:06:08 -- common/autotest_common.sh@1114 -- # noatime 00:25:45.637 05:06:08 -- dd/posix.sh@53 -- # local atime_if 00:25:45.637 05:06:08 -- dd/posix.sh@54 -- # local atime_of 00:25:45.637 05:06:08 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:45.637 05:06:08 -- dd/common.sh@98 -- # xtrace_disable 00:25:45.637 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.637 05:06:08 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:45.637 05:06:08 -- dd/posix.sh@60 -- # atime_if=1731906367 
00:25:45.637 05:06:08 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:45.637 05:06:08 -- dd/posix.sh@61 -- # atime_of=1731906368 00:25:45.637 05:06:08 -- dd/posix.sh@66 -- # sleep 1 00:25:46.590 05:06:09 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:46.590 [2024-11-18 05:06:10.045654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:46.590 [2024-11-18 05:06:10.045842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89010 ] 00:25:46.862 [2024-11-18 05:06:10.215933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.862 [2024-11-18 05:06:10.368059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.120  [2024-11-18T05:06:11.581Z] Copying: 512/512 [B] (average 500 kBps) 00:25:48.057 00:25:48.057 05:06:11 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:48.057 05:06:11 -- dd/posix.sh@69 -- # (( atime_if == 1731906367 )) 00:25:48.057 05:06:11 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:48.057 05:06:11 -- dd/posix.sh@70 -- # (( atime_of == 1731906368 )) 00:25:48.057 05:06:11 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:48.057 [2024-11-18 05:06:11.564748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:48.057 [2024-11-18 05:06:11.564905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89028 ] 00:25:48.316 [2024-11-18 05:06:11.735262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.575 [2024-11-18 05:06:11.895333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.834  [2024-11-18T05:06:13.295Z] Copying: 512/512 [B] (average 500 kBps) 00:25:49.771 00:25:49.771 05:06:13 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:49.771 05:06:13 -- dd/posix.sh@73 -- # (( atime_if < 1731906372 )) 00:25:49.771 00:25:49.771 real 0m4.067s 00:25:49.771 user 0m2.421s 00:25:49.771 sys 0m0.416s 00:25:49.771 05:06:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:49.771 05:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:49.771 ************************************ 00:25:49.771 END TEST dd_flag_noatime 00:25:49.771 ************************************ 00:25:49.771 05:06:13 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:25:49.771 05:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:49.771 05:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.771 05:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:49.771 ************************************ 00:25:49.771 START TEST dd_flags_misc 00:25:49.771 ************************************ 00:25:49.771 05:06:13 -- common/autotest_common.sh@1114 -- # io 00:25:49.771 05:06:13 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:49.771 05:06:13 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:25:49.771 05:06:13 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:49.771 05:06:13 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:49.771 05:06:13 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:49.771 05:06:13 -- dd/common.sh@98 -- # xtrace_disable 00:25:49.771 05:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:49.772 05:06:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:49.772 05:06:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:49.772 [2024-11-18 05:06:13.145948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:49.772 [2024-11-18 05:06:13.146104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89067 ] 00:25:50.030 [2024-11-18 05:06:13.315964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.030 [2024-11-18 05:06:13.466243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.288  [2024-11-18T05:06:14.749Z] Copying: 512/512 [B] (average 500 kBps) 00:25:51.225 00:25:51.226 05:06:14 -- dd/posix.sh@93 -- # [[ h9rv9vyc6vvli5aiy7ay14fqm0obwt1zpc82a95ccgxqxjrwd397qxt8n4gkffbisprf1dzahtosatpv2swuvtkanf92bss2tsxn6uiobnlsnnqj7h3y4p7sjmvp9kfjbqzaojy5pr9yza0xus77itwswf07pzuqav2wgftn32gyisfiuvp19pauep4xt2wemzgcpf620vffrk2s82wivi6nof940p97vacjqb06lbgw4mqvajretahsv1u6k42z0es49u71yn3vlz02dbkufkqqs6vu35ggd52f10dvku69vtytvaof1qm7jrcdf10nrcnnc0u6jif5phsjza54myrjyes167xesiceyplji8dcfgnghrrswpf04drwou4n4fr8r8rfjztigqo4aa4my9b72zpoy3fl7et3wc66xjpdsbaibdqlo59uhwx82zzh67nb55v49x6gx9wsp89jvh10q5yflkdz8l53dl497uj39wrfut7jncv173bqic3w == \h\9\r\v\9\v\y\c\6\v\v\l\i\5\a\i\y\7\a\y\1\4\f\q\m\0\o\b\w\t\1\z\p\c\8\2\a\9\5\c\c\g\x\q\x\j\r\w\d\3\9\7\q\x\t\8\n\4\g\k\f\f\b\i\s\p\r\f\1\d\z\a\h\t\o\s\a\t\p\v\2\s\w\u\v\t\k\a\n\f\9\2\b\s\s\2\t\s\x\n\6\u\i\o\b\n\l\s\n\n\q\j\7\h\3\y\4\p\7\s\j\m\v\p\9\k\f\j\b\q\z\a\o\j\y\5\p\r\9\y\z\a\0\x\u\s\7\7\i\t\w\s\w\f\0\7\p\z\u\q\a\v\2\w\g\f\t\n\3\2\g\y\i\s\f\i\u\v\p\1\9\p\a\u\e\p\4\x\t\2\w\e\m\z\g\c\p\f\6\2\0\v\f\f\r\k\2\s\8\2\w\i\v\i\6\n\o\f\9\4\0\p\9\7\v\a\c\j\q\b\0\6\l\b\g\w\4\m\q\v\a\j\r\e\t\a\h\s\v\1\u\6\k\4\2\z\0\e\s\4\9\u\7\1\y\n\3\v\l\z\0\2\d\b\k\u\f\k\q\q\s\6\v\u\3\5\g\g\d\5\2\f\1\0\d\v\k\u\6\9\v\t\y\t\v\a\o\f\1\q\m\7\j\r\c\d\f\1\0\n\r\c\n\n\c\0\u\6\j\i\f\5\p\h\s\j\z\a\5\4\m\y\r\j\y\e\s\1\6\7\x\e\s\i\c\e\y\p\l\j\i\8\d\c\f\g\n\g\h\r\r\s\w\p\f\0\4\d\r\w\o\u\4\n\4\f\r\8\r\8\r\f\j\z\t\i\g\q\o\4\a\a\4\m\y\9\b\7\2\z\p\o\y\3\f\l\7\e\t\3\w\c\6\6\x\j\p\d\s\b\a\i\b\d\q\l\o\5\9\u\h\w\x\8\2\z\z\h\6\7\n\b\5\5\v\4\9\x\6\g\x\9\w\s\p\8\9\j\v\h\1\0\q\5\y\f\l\k\d\z\8\l\5\3\d\l\4\9\7\u\j\3\9\w\r\f\u\t\7\j\n\c\v\1\7\3\b\q\i\c\3\w ]] 00:25:51.226 05:06:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:51.226 05:06:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:51.226 [2024-11-18 05:06:14.650767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:51.226 [2024-11-18 05:06:14.651571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89085 ] 00:25:51.484 [2024-11-18 05:06:14.823101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.484 [2024-11-18 05:06:14.976671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.743  [2024-11-18T05:06:16.204Z] Copying: 512/512 [B] (average 500 kBps) 00:25:52.680 00:25:52.680 05:06:16 -- dd/posix.sh@93 -- # [[ h9rv9vyc6vvli5aiy7ay14fqm0obwt1zpc82a95ccgxqxjrwd397qxt8n4gkffbisprf1dzahtosatpv2swuvtkanf92bss2tsxn6uiobnlsnnqj7h3y4p7sjmvp9kfjbqzaojy5pr9yza0xus77itwswf07pzuqav2wgftn32gyisfiuvp19pauep4xt2wemzgcpf620vffrk2s82wivi6nof940p97vacjqb06lbgw4mqvajretahsv1u6k42z0es49u71yn3vlz02dbkufkqqs6vu35ggd52f10dvku69vtytvaof1qm7jrcdf10nrcnnc0u6jif5phsjza54myrjyes167xesiceyplji8dcfgnghrrswpf04drwou4n4fr8r8rfjztigqo4aa4my9b72zpoy3fl7et3wc66xjpdsbaibdqlo59uhwx82zzh67nb55v49x6gx9wsp89jvh10q5yflkdz8l53dl497uj39wrfut7jncv173bqic3w == \h\9\r\v\9\v\y\c\6\v\v\l\i\5\a\i\y\7\a\y\1\4\f\q\m\0\o\b\w\t\1\z\p\c\8\2\a\9\5\c\c\g\x\q\x\j\r\w\d\3\9\7\q\x\t\8\n\4\g\k\f\f\b\i\s\p\r\f\1\d\z\a\h\t\o\s\a\t\p\v\2\s\w\u\v\t\k\a\n\f\9\2\b\s\s\2\t\s\x\n\6\u\i\o\b\n\l\s\n\n\q\j\7\h\3\y\4\p\7\s\j\m\v\p\9\k\f\j\b\q\z\a\o\j\y\5\p\r\9\y\z\a\0\x\u\s\7\7\i\t\w\s\w\f\0\7\p\z\u\q\a\v\2\w\g\f\t\n\3\2\g\y\i\s\f\i\u\v\p\1\9\p\a\u\e\p\4\x\t\2\w\e\m\z\g\c\p\f\6\2\0\v\f\f\r\k\2\s\8\2\w\i\v\i\6\n\o\f\9\4\0\p\9\7\v\a\c\j\q\b\0\6\l\b\g\w\4\m\q\v\a\j\r\e\t\a\h\s\v\1\u\6\k\4\2\z\0\e\s\4\9\u\7\1\y\n\3\v\l\z\0\2\d\b\k\u\f\k\q\q\s\6\v\u\3\5\g\g\d\5\2\f\1\0\d\v\k\u\6\9\v\t\y\t\v\a\o\f\1\q\m\7\j\r\c\d\f\1\0\n\r\c\n\n\c\0\u\6\j\i\f\5\p\h\s\j\z\a\5\4\m\y\r\j\y\e\s\1\6\7\x\e\s\i\c\e\y\p\l\j\i\8\d\c\f\g\n\g\h\r\r\s\w\p\f\0\4\d\r\w\o\u\4\n\4\f\r\8\r\8\r\f\j\z\t\i\g\q\o\4\a\a\4\m\y\9\b\7\2\z\p\o\y\3\f\l\7\e\t\3\w\c\6\6\x\j\p\d\s\b\a\i\b\d\q\l\o\5\9\u\h\w\x\8\2\z\z\h\6\7\n\b\5\5\v\4\9\x\6\g\x\9\w\s\p\8\9\j\v\h\1\0\q\5\y\f\l\k\d\z\8\l\5\3\d\l\4\9\7\u\j\3\9\w\r\f\u\t\7\j\n\c\v\1\7\3\b\q\i\c\3\w ]] 00:25:52.680 05:06:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:52.680 05:06:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:52.680 [2024-11-18 05:06:16.152806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:52.680 [2024-11-18 05:06:16.152958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89106 ] 00:25:52.939 [2024-11-18 05:06:16.321579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.198 [2024-11-18 05:06:16.471323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.198  [2024-11-18T05:06:17.659Z] Copying: 512/512 [B] (average 100 kBps) 00:25:54.135 00:25:54.135 05:06:17 -- dd/posix.sh@93 -- # [[ h9rv9vyc6vvli5aiy7ay14fqm0obwt1zpc82a95ccgxqxjrwd397qxt8n4gkffbisprf1dzahtosatpv2swuvtkanf92bss2tsxn6uiobnlsnnqj7h3y4p7sjmvp9kfjbqzaojy5pr9yza0xus77itwswf07pzuqav2wgftn32gyisfiuvp19pauep4xt2wemzgcpf620vffrk2s82wivi6nof940p97vacjqb06lbgw4mqvajretahsv1u6k42z0es49u71yn3vlz02dbkufkqqs6vu35ggd52f10dvku69vtytvaof1qm7jrcdf10nrcnnc0u6jif5phsjza54myrjyes167xesiceyplji8dcfgnghrrswpf04drwou4n4fr8r8rfjztigqo4aa4my9b72zpoy3fl7et3wc66xjpdsbaibdqlo59uhwx82zzh67nb55v49x6gx9wsp89jvh10q5yflkdz8l53dl497uj39wrfut7jncv173bqic3w == \h\9\r\v\9\v\y\c\6\v\v\l\i\5\a\i\y\7\a\y\1\4\f\q\m\0\o\b\w\t\1\z\p\c\8\2\a\9\5\c\c\g\x\q\x\j\r\w\d\3\9\7\q\x\t\8\n\4\g\k\f\f\b\i\s\p\r\f\1\d\z\a\h\t\o\s\a\t\p\v\2\s\w\u\v\t\k\a\n\f\9\2\b\s\s\2\t\s\x\n\6\u\i\o\b\n\l\s\n\n\q\j\7\h\3\y\4\p\7\s\j\m\v\p\9\k\f\j\b\q\z\a\o\j\y\5\p\r\9\y\z\a\0\x\u\s\7\7\i\t\w\s\w\f\0\7\p\z\u\q\a\v\2\w\g\f\t\n\3\2\g\y\i\s\f\i\u\v\p\1\9\p\a\u\e\p\4\x\t\2\w\e\m\z\g\c\p\f\6\2\0\v\f\f\r\k\2\s\8\2\w\i\v\i\6\n\o\f\9\4\0\p\9\7\v\a\c\j\q\b\0\6\l\b\g\w\4\m\q\v\a\j\r\e\t\a\h\s\v\1\u\6\k\4\2\z\0\e\s\4\9\u\7\1\y\n\3\v\l\z\0\2\d\b\k\u\f\k\q\q\s\6\v\u\3\5\g\g\d\5\2\f\1\0\d\v\k\u\6\9\v\t\y\t\v\a\o\f\1\q\m\7\j\r\c\d\f\1\0\n\r\c\n\n\c\0\u\6\j\i\f\5\p\h\s\j\z\a\5\4\m\y\r\j\y\e\s\1\6\7\x\e\s\i\c\e\y\p\l\j\i\8\d\c\f\g\n\g\h\r\r\s\w\p\f\0\4\d\r\w\o\u\4\n\4\f\r\8\r\8\r\f\j\z\t\i\g\q\o\4\a\a\4\m\y\9\b\7\2\z\p\o\y\3\f\l\7\e\t\3\w\c\6\6\x\j\p\d\s\b\a\i\b\d\q\l\o\5\9\u\h\w\x\8\2\z\z\h\6\7\n\b\5\5\v\4\9\x\6\g\x\9\w\s\p\8\9\j\v\h\1\0\q\5\y\f\l\k\d\z\8\l\5\3\d\l\4\9\7\u\j\3\9\w\r\f\u\t\7\j\n\c\v\1\7\3\b\q\i\c\3\w ]] 00:25:54.135 05:06:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:54.135 05:06:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:54.135 [2024-11-18 05:06:17.652704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:54.135 [2024-11-18 05:06:17.652926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89120 ] 00:25:54.393 [2024-11-18 05:06:17.818688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.652 [2024-11-18 05:06:17.971505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.910  [2024-11-18T05:06:19.372Z] Copying: 512/512 [B] (average 125 kBps) 00:25:55.848 00:25:55.848 05:06:19 -- dd/posix.sh@93 -- # [[ h9rv9vyc6vvli5aiy7ay14fqm0obwt1zpc82a95ccgxqxjrwd397qxt8n4gkffbisprf1dzahtosatpv2swuvtkanf92bss2tsxn6uiobnlsnnqj7h3y4p7sjmvp9kfjbqzaojy5pr9yza0xus77itwswf07pzuqav2wgftn32gyisfiuvp19pauep4xt2wemzgcpf620vffrk2s82wivi6nof940p97vacjqb06lbgw4mqvajretahsv1u6k42z0es49u71yn3vlz02dbkufkqqs6vu35ggd52f10dvku69vtytvaof1qm7jrcdf10nrcnnc0u6jif5phsjza54myrjyes167xesiceyplji8dcfgnghrrswpf04drwou4n4fr8r8rfjztigqo4aa4my9b72zpoy3fl7et3wc66xjpdsbaibdqlo59uhwx82zzh67nb55v49x6gx9wsp89jvh10q5yflkdz8l53dl497uj39wrfut7jncv173bqic3w == \h\9\r\v\9\v\y\c\6\v\v\l\i\5\a\i\y\7\a\y\1\4\f\q\m\0\o\b\w\t\1\z\p\c\8\2\a\9\5\c\c\g\x\q\x\j\r\w\d\3\9\7\q\x\t\8\n\4\g\k\f\f\b\i\s\p\r\f\1\d\z\a\h\t\o\s\a\t\p\v\2\s\w\u\v\t\k\a\n\f\9\2\b\s\s\2\t\s\x\n\6\u\i\o\b\n\l\s\n\n\q\j\7\h\3\y\4\p\7\s\j\m\v\p\9\k\f\j\b\q\z\a\o\j\y\5\p\r\9\y\z\a\0\x\u\s\7\7\i\t\w\s\w\f\0\7\p\z\u\q\a\v\2\w\g\f\t\n\3\2\g\y\i\s\f\i\u\v\p\1\9\p\a\u\e\p\4\x\t\2\w\e\m\z\g\c\p\f\6\2\0\v\f\f\r\k\2\s\8\2\w\i\v\i\6\n\o\f\9\4\0\p\9\7\v\a\c\j\q\b\0\6\l\b\g\w\4\m\q\v\a\j\r\e\t\a\h\s\v\1\u\6\k\4\2\z\0\e\s\4\9\u\7\1\y\n\3\v\l\z\0\2\d\b\k\u\f\k\q\q\s\6\v\u\3\5\g\g\d\5\2\f\1\0\d\v\k\u\6\9\v\t\y\t\v\a\o\f\1\q\m\7\j\r\c\d\f\1\0\n\r\c\n\n\c\0\u\6\j\i\f\5\p\h\s\j\z\a\5\4\m\y\r\j\y\e\s\1\6\7\x\e\s\i\c\e\y\p\l\j\i\8\d\c\f\g\n\g\h\r\r\s\w\p\f\0\4\d\r\w\o\u\4\n\4\f\r\8\r\8\r\f\j\z\t\i\g\q\o\4\a\a\4\m\y\9\b\7\2\z\p\o\y\3\f\l\7\e\t\3\w\c\6\6\x\j\p\d\s\b\a\i\b\d\q\l\o\5\9\u\h\w\x\8\2\z\z\h\6\7\n\b\5\5\v\4\9\x\6\g\x\9\w\s\p\8\9\j\v\h\1\0\q\5\y\f\l\k\d\z\8\l\5\3\d\l\4\9\7\u\j\3\9\w\r\f\u\t\7\j\n\c\v\1\7\3\b\q\i\c\3\w ]] 00:25:55.848 05:06:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:55.848 05:06:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:55.848 05:06:19 -- dd/common.sh@98 -- # xtrace_disable 00:25:55.848 05:06:19 -- common/autotest_common.sh@10 -- # set +x 00:25:55.848 05:06:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:55.848 05:06:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:55.848 [2024-11-18 05:06:19.154374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:55.848 [2024-11-18 05:06:19.154538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89140 ] 00:25:55.848 [2024-11-18 05:06:19.322567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.106 [2024-11-18 05:06:19.476869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.365  [2024-11-18T05:06:20.827Z] Copying: 512/512 [B] (average 500 kBps) 00:25:57.303 00:25:57.303 05:06:20 -- dd/posix.sh@93 -- # [[ ou0jdd6m45pvqrvlzdszydb6d5cm2bwgm48f4hn1xe1ev3s353g37xhwspxiqe4cwnr4tz1pcwa3i5rdpvaphtgngvb7vvlwqj2evre36s1hk07ofcaj7mz4i0576tvif408pn0cy4s3442z03kjzmkebl0u7tb6nja759adoqgm785t0ugwwakr4ayjoqz6zjug8emlec79i7grp522nbi2avvycp2btgbqe33do6ho3ngpacyldp2zj0ed93mlpcf46swe8jltiy05skkm4rytzu2nplu2thmy01evx6s8prjwva787yh77ut1x59eswaun4fgt91doh98znifmhkgvjnop2chx9dj1m9mm9q4tvehojtexhv83b7hjsfz98nr1c81gifmljxmddcp6hnd9exswvmwe0ytdg36tr6r9bv42iy2v981ixse5o6lrzc0qnbdl6ukctd86woieo4ygp58lb6zvwnfdivncntrdzvbyqufuy1khrjbtyyr == \o\u\0\j\d\d\6\m\4\5\p\v\q\r\v\l\z\d\s\z\y\d\b\6\d\5\c\m\2\b\w\g\m\4\8\f\4\h\n\1\x\e\1\e\v\3\s\3\5\3\g\3\7\x\h\w\s\p\x\i\q\e\4\c\w\n\r\4\t\z\1\p\c\w\a\3\i\5\r\d\p\v\a\p\h\t\g\n\g\v\b\7\v\v\l\w\q\j\2\e\v\r\e\3\6\s\1\h\k\0\7\o\f\c\a\j\7\m\z\4\i\0\5\7\6\t\v\i\f\4\0\8\p\n\0\c\y\4\s\3\4\4\2\z\0\3\k\j\z\m\k\e\b\l\0\u\7\t\b\6\n\j\a\7\5\9\a\d\o\q\g\m\7\8\5\t\0\u\g\w\w\a\k\r\4\a\y\j\o\q\z\6\z\j\u\g\8\e\m\l\e\c\7\9\i\7\g\r\p\5\2\2\n\b\i\2\a\v\v\y\c\p\2\b\t\g\b\q\e\3\3\d\o\6\h\o\3\n\g\p\a\c\y\l\d\p\2\z\j\0\e\d\9\3\m\l\p\c\f\4\6\s\w\e\8\j\l\t\i\y\0\5\s\k\k\m\4\r\y\t\z\u\2\n\p\l\u\2\t\h\m\y\0\1\e\v\x\6\s\8\p\r\j\w\v\a\7\8\7\y\h\7\7\u\t\1\x\5\9\e\s\w\a\u\n\4\f\g\t\9\1\d\o\h\9\8\z\n\i\f\m\h\k\g\v\j\n\o\p\2\c\h\x\9\d\j\1\m\9\m\m\9\q\4\t\v\e\h\o\j\t\e\x\h\v\8\3\b\7\h\j\s\f\z\9\8\n\r\1\c\8\1\g\i\f\m\l\j\x\m\d\d\c\p\6\h\n\d\9\e\x\s\w\v\m\w\e\0\y\t\d\g\3\6\t\r\6\r\9\b\v\4\2\i\y\2\v\9\8\1\i\x\s\e\5\o\6\l\r\z\c\0\q\n\b\d\l\6\u\k\c\t\d\8\6\w\o\i\e\o\4\y\g\p\5\8\l\b\6\z\v\w\n\f\d\i\v\n\c\n\t\r\d\z\v\b\y\q\u\f\u\y\1\k\h\r\j\b\t\y\y\r ]] 00:25:57.303 05:06:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:57.303 05:06:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:57.303 [2024-11-18 05:06:20.665410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:57.303 [2024-11-18 05:06:20.665581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89158 ] 00:25:57.562 [2024-11-18 05:06:20.835144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.562 [2024-11-18 05:06:20.981205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.822  [2024-11-18T05:06:22.285Z] Copying: 512/512 [B] (average 500 kBps) 00:25:58.761 00:25:58.761 05:06:22 -- dd/posix.sh@93 -- # [[ ou0jdd6m45pvqrvlzdszydb6d5cm2bwgm48f4hn1xe1ev3s353g37xhwspxiqe4cwnr4tz1pcwa3i5rdpvaphtgngvb7vvlwqj2evre36s1hk07ofcaj7mz4i0576tvif408pn0cy4s3442z03kjzmkebl0u7tb6nja759adoqgm785t0ugwwakr4ayjoqz6zjug8emlec79i7grp522nbi2avvycp2btgbqe33do6ho3ngpacyldp2zj0ed93mlpcf46swe8jltiy05skkm4rytzu2nplu2thmy01evx6s8prjwva787yh77ut1x59eswaun4fgt91doh98znifmhkgvjnop2chx9dj1m9mm9q4tvehojtexhv83b7hjsfz98nr1c81gifmljxmddcp6hnd9exswvmwe0ytdg36tr6r9bv42iy2v981ixse5o6lrzc0qnbdl6ukctd86woieo4ygp58lb6zvwnfdivncntrdzvbyqufuy1khrjbtyyr == \o\u\0\j\d\d\6\m\4\5\p\v\q\r\v\l\z\d\s\z\y\d\b\6\d\5\c\m\2\b\w\g\m\4\8\f\4\h\n\1\x\e\1\e\v\3\s\3\5\3\g\3\7\x\h\w\s\p\x\i\q\e\4\c\w\n\r\4\t\z\1\p\c\w\a\3\i\5\r\d\p\v\a\p\h\t\g\n\g\v\b\7\v\v\l\w\q\j\2\e\v\r\e\3\6\s\1\h\k\0\7\o\f\c\a\j\7\m\z\4\i\0\5\7\6\t\v\i\f\4\0\8\p\n\0\c\y\4\s\3\4\4\2\z\0\3\k\j\z\m\k\e\b\l\0\u\7\t\b\6\n\j\a\7\5\9\a\d\o\q\g\m\7\8\5\t\0\u\g\w\w\a\k\r\4\a\y\j\o\q\z\6\z\j\u\g\8\e\m\l\e\c\7\9\i\7\g\r\p\5\2\2\n\b\i\2\a\v\v\y\c\p\2\b\t\g\b\q\e\3\3\d\o\6\h\o\3\n\g\p\a\c\y\l\d\p\2\z\j\0\e\d\9\3\m\l\p\c\f\4\6\s\w\e\8\j\l\t\i\y\0\5\s\k\k\m\4\r\y\t\z\u\2\n\p\l\u\2\t\h\m\y\0\1\e\v\x\6\s\8\p\r\j\w\v\a\7\8\7\y\h\7\7\u\t\1\x\5\9\e\s\w\a\u\n\4\f\g\t\9\1\d\o\h\9\8\z\n\i\f\m\h\k\g\v\j\n\o\p\2\c\h\x\9\d\j\1\m\9\m\m\9\q\4\t\v\e\h\o\j\t\e\x\h\v\8\3\b\7\h\j\s\f\z\9\8\n\r\1\c\8\1\g\i\f\m\l\j\x\m\d\d\c\p\6\h\n\d\9\e\x\s\w\v\m\w\e\0\y\t\d\g\3\6\t\r\6\r\9\b\v\4\2\i\y\2\v\9\8\1\i\x\s\e\5\o\6\l\r\z\c\0\q\n\b\d\l\6\u\k\c\t\d\8\6\w\o\i\e\o\4\y\g\p\5\8\l\b\6\z\v\w\n\f\d\i\v\n\c\n\t\r\d\z\v\b\y\q\u\f\u\y\1\k\h\r\j\b\t\y\y\r ]] 00:25:58.761 05:06:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:58.761 05:06:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:58.761 [2024-11-18 05:06:22.170620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:58.761 [2024-11-18 05:06:22.170776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89179 ] 00:25:59.021 [2024-11-18 05:06:22.340126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.021 [2024-11-18 05:06:22.489393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.280  [2024-11-18T05:06:23.742Z] Copying: 512/512 [B] (average 100 kBps) 00:26:00.218 00:26:00.218 05:06:23 -- dd/posix.sh@93 -- # [[ ou0jdd6m45pvqrvlzdszydb6d5cm2bwgm48f4hn1xe1ev3s353g37xhwspxiqe4cwnr4tz1pcwa3i5rdpvaphtgngvb7vvlwqj2evre36s1hk07ofcaj7mz4i0576tvif408pn0cy4s3442z03kjzmkebl0u7tb6nja759adoqgm785t0ugwwakr4ayjoqz6zjug8emlec79i7grp522nbi2avvycp2btgbqe33do6ho3ngpacyldp2zj0ed93mlpcf46swe8jltiy05skkm4rytzu2nplu2thmy01evx6s8prjwva787yh77ut1x59eswaun4fgt91doh98znifmhkgvjnop2chx9dj1m9mm9q4tvehojtexhv83b7hjsfz98nr1c81gifmljxmddcp6hnd9exswvmwe0ytdg36tr6r9bv42iy2v981ixse5o6lrzc0qnbdl6ukctd86woieo4ygp58lb6zvwnfdivncntrdzvbyqufuy1khrjbtyyr == \o\u\0\j\d\d\6\m\4\5\p\v\q\r\v\l\z\d\s\z\y\d\b\6\d\5\c\m\2\b\w\g\m\4\8\f\4\h\n\1\x\e\1\e\v\3\s\3\5\3\g\3\7\x\h\w\s\p\x\i\q\e\4\c\w\n\r\4\t\z\1\p\c\w\a\3\i\5\r\d\p\v\a\p\h\t\g\n\g\v\b\7\v\v\l\w\q\j\2\e\v\r\e\3\6\s\1\h\k\0\7\o\f\c\a\j\7\m\z\4\i\0\5\7\6\t\v\i\f\4\0\8\p\n\0\c\y\4\s\3\4\4\2\z\0\3\k\j\z\m\k\e\b\l\0\u\7\t\b\6\n\j\a\7\5\9\a\d\o\q\g\m\7\8\5\t\0\u\g\w\w\a\k\r\4\a\y\j\o\q\z\6\z\j\u\g\8\e\m\l\e\c\7\9\i\7\g\r\p\5\2\2\n\b\i\2\a\v\v\y\c\p\2\b\t\g\b\q\e\3\3\d\o\6\h\o\3\n\g\p\a\c\y\l\d\p\2\z\j\0\e\d\9\3\m\l\p\c\f\4\6\s\w\e\8\j\l\t\i\y\0\5\s\k\k\m\4\r\y\t\z\u\2\n\p\l\u\2\t\h\m\y\0\1\e\v\x\6\s\8\p\r\j\w\v\a\7\8\7\y\h\7\7\u\t\1\x\5\9\e\s\w\a\u\n\4\f\g\t\9\1\d\o\h\9\8\z\n\i\f\m\h\k\g\v\j\n\o\p\2\c\h\x\9\d\j\1\m\9\m\m\9\q\4\t\v\e\h\o\j\t\e\x\h\v\8\3\b\7\h\j\s\f\z\9\8\n\r\1\c\8\1\g\i\f\m\l\j\x\m\d\d\c\p\6\h\n\d\9\e\x\s\w\v\m\w\e\0\y\t\d\g\3\6\t\r\6\r\9\b\v\4\2\i\y\2\v\9\8\1\i\x\s\e\5\o\6\l\r\z\c\0\q\n\b\d\l\6\u\k\c\t\d\8\6\w\o\i\e\o\4\y\g\p\5\8\l\b\6\z\v\w\n\f\d\i\v\n\c\n\t\r\d\z\v\b\y\q\u\f\u\y\1\k\h\r\j\b\t\y\y\r ]] 00:26:00.218 05:06:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:00.218 05:06:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:00.218 [2024-11-18 05:06:23.677596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:00.218 [2024-11-18 05:06:23.677770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89193 ] 00:26:00.478 [2024-11-18 05:06:23.838488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.478 [2024-11-18 05:06:23.987886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.737  [2024-11-18T05:06:25.201Z] Copying: 512/512 [B] (average 100 kBps) 00:26:01.677 00:26:01.677 05:06:25 -- dd/posix.sh@93 -- # [[ ou0jdd6m45pvqrvlzdszydb6d5cm2bwgm48f4hn1xe1ev3s353g37xhwspxiqe4cwnr4tz1pcwa3i5rdpvaphtgngvb7vvlwqj2evre36s1hk07ofcaj7mz4i0576tvif408pn0cy4s3442z03kjzmkebl0u7tb6nja759adoqgm785t0ugwwakr4ayjoqz6zjug8emlec79i7grp522nbi2avvycp2btgbqe33do6ho3ngpacyldp2zj0ed93mlpcf46swe8jltiy05skkm4rytzu2nplu2thmy01evx6s8prjwva787yh77ut1x59eswaun4fgt91doh98znifmhkgvjnop2chx9dj1m9mm9q4tvehojtexhv83b7hjsfz98nr1c81gifmljxmddcp6hnd9exswvmwe0ytdg36tr6r9bv42iy2v981ixse5o6lrzc0qnbdl6ukctd86woieo4ygp58lb6zvwnfdivncntrdzvbyqufuy1khrjbtyyr == \o\u\0\j\d\d\6\m\4\5\p\v\q\r\v\l\z\d\s\z\y\d\b\6\d\5\c\m\2\b\w\g\m\4\8\f\4\h\n\1\x\e\1\e\v\3\s\3\5\3\g\3\7\x\h\w\s\p\x\i\q\e\4\c\w\n\r\4\t\z\1\p\c\w\a\3\i\5\r\d\p\v\a\p\h\t\g\n\g\v\b\7\v\v\l\w\q\j\2\e\v\r\e\3\6\s\1\h\k\0\7\o\f\c\a\j\7\m\z\4\i\0\5\7\6\t\v\i\f\4\0\8\p\n\0\c\y\4\s\3\4\4\2\z\0\3\k\j\z\m\k\e\b\l\0\u\7\t\b\6\n\j\a\7\5\9\a\d\o\q\g\m\7\8\5\t\0\u\g\w\w\a\k\r\4\a\y\j\o\q\z\6\z\j\u\g\8\e\m\l\e\c\7\9\i\7\g\r\p\5\2\2\n\b\i\2\a\v\v\y\c\p\2\b\t\g\b\q\e\3\3\d\o\6\h\o\3\n\g\p\a\c\y\l\d\p\2\z\j\0\e\d\9\3\m\l\p\c\f\4\6\s\w\e\8\j\l\t\i\y\0\5\s\k\k\m\4\r\y\t\z\u\2\n\p\l\u\2\t\h\m\y\0\1\e\v\x\6\s\8\p\r\j\w\v\a\7\8\7\y\h\7\7\u\t\1\x\5\9\e\s\w\a\u\n\4\f\g\t\9\1\d\o\h\9\8\z\n\i\f\m\h\k\g\v\j\n\o\p\2\c\h\x\9\d\j\1\m\9\m\m\9\q\4\t\v\e\h\o\j\t\e\x\h\v\8\3\b\7\h\j\s\f\z\9\8\n\r\1\c\8\1\g\i\f\m\l\j\x\m\d\d\c\p\6\h\n\d\9\e\x\s\w\v\m\w\e\0\y\t\d\g\3\6\t\r\6\r\9\b\v\4\2\i\y\2\v\9\8\1\i\x\s\e\5\o\6\l\r\z\c\0\q\n\b\d\l\6\u\k\c\t\d\8\6\w\o\i\e\o\4\y\g\p\5\8\l\b\6\z\v\w\n\f\d\i\v\n\c\n\t\r\d\z\v\b\y\q\u\f\u\y\1\k\h\r\j\b\t\y\y\r ]] 00:26:01.677 00:26:01.677 real 0m12.040s 00:26:01.677 user 0m9.632s 00:26:01.677 sys 0m1.464s 00:26:01.677 05:06:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:01.677 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:26:01.677 ************************************ 00:26:01.677 END TEST dd_flags_misc 00:26:01.677 ************************************ 00:26:01.677 05:06:25 -- dd/posix.sh@131 -- # tests_forced_aio 00:26:01.677 05:06:25 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:26:01.677 * Second test run, disabling liburing, forcing AIO 00:26:01.677 05:06:25 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:26:01.677 05:06:25 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:26:01.677 05:06:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:01.677 05:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:01.677 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:26:01.677 ************************************ 00:26:01.677 START TEST dd_flag_append_forced_aio 00:26:01.677 ************************************ 00:26:01.677 05:06:25 -- common/autotest_common.sh@1114 -- # append 00:26:01.677 05:06:25 -- dd/posix.sh@16 -- # local dump0 00:26:01.677 05:06:25 -- dd/posix.sh@17 -- # local dump1 00:26:01.677 05:06:25 -- dd/posix.sh@19 -- # gen_bytes 
32 00:26:01.677 05:06:25 -- dd/common.sh@98 -- # xtrace_disable 00:26:01.677 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:26:01.677 05:06:25 -- dd/posix.sh@19 -- # dump0=z1tyrjco0t2954vbe3tk4h1mok7p5zxe 00:26:01.677 05:06:25 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:01.677 05:06:25 -- dd/common.sh@98 -- # xtrace_disable 00:26:01.677 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:26:01.677 05:06:25 -- dd/posix.sh@20 -- # dump1=nbyb0v8e566f0whjn29d7sd5c0bjl4wv 00:26:01.677 05:06:25 -- dd/posix.sh@22 -- # printf %s z1tyrjco0t2954vbe3tk4h1mok7p5zxe 00:26:01.677 05:06:25 -- dd/posix.sh@23 -- # printf %s nbyb0v8e566f0whjn29d7sd5c0bjl4wv 00:26:01.677 05:06:25 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:01.937 [2024-11-18 05:06:25.243320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:01.937 [2024-11-18 05:06:25.243480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89232 ] 00:26:01.937 [2024-11-18 05:06:25.411090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.196 [2024-11-18 05:06:25.569344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.455  [2024-11-18T05:06:26.917Z] Copying: 32/32 [B] (average 31 kBps) 00:26:03.393 00:26:03.393 05:06:26 -- dd/posix.sh@27 -- # [[ nbyb0v8e566f0whjn29d7sd5c0bjl4wvz1tyrjco0t2954vbe3tk4h1mok7p5zxe == \n\b\y\b\0\v\8\e\5\6\6\f\0\w\h\j\n\2\9\d\7\s\d\5\c\0\b\j\l\4\w\v\z\1\t\y\r\j\c\o\0\t\2\9\5\4\v\b\e\3\t\k\4\h\1\m\o\k\7\p\5\z\x\e ]] 00:26:03.393 00:26:03.393 real 0m1.517s 00:26:03.393 user 0m1.210s 00:26:03.393 sys 0m0.192s 00:26:03.393 ************************************ 00:26:03.393 END TEST dd_flag_append_forced_aio 00:26:03.393 ************************************ 00:26:03.393 05:06:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:03.393 05:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:03.393 05:06:26 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:26:03.393 05:06:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:03.393 05:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.393 05:06:26 -- common/autotest_common.sh@10 -- # set +x 00:26:03.393 ************************************ 00:26:03.393 START TEST dd_flag_directory_forced_aio 00:26:03.393 ************************************ 00:26:03.393 05:06:26 -- common/autotest_common.sh@1114 -- # directory 00:26:03.393 05:06:26 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:03.393 05:06:26 -- common/autotest_common.sh@650 -- # local es=0 00:26:03.393 05:06:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:03.393 05:06:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.393 05:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.393 05:06:26 -- common/autotest_common.sh@642 -- 
# type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.393 05:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.393 05:06:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.393 05:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:03.393 05:06:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.393 05:06:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:03.393 05:06:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:03.393 [2024-11-18 05:06:26.800476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:03.393 [2024-11-18 05:06:26.800647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89269 ] 00:26:03.652 [2024-11-18 05:06:26.969364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.652 [2024-11-18 05:06:27.114812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.911 [2024-11-18 05:06:27.331808] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:03.911 [2024-11-18 05:06:27.331882] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:03.911 [2024-11-18 05:06:27.331900] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:04.480 [2024-11-18 05:06:27.872837] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:04.740 05:06:28 -- common/autotest_common.sh@653 -- # es=236 00:26:04.740 05:06:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:04.740 05:06:28 -- common/autotest_common.sh@662 -- # es=108 00:26:04.740 05:06:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:04.740 05:06:28 -- common/autotest_common.sh@670 -- # es=1 00:26:04.740 05:06:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:04.740 05:06:28 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:04.740 05:06:28 -- common/autotest_common.sh@650 -- # local es=0 00:26:04.740 05:06:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:04.740 05:06:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.740 05:06:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.740 05:06:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.740 05:06:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.740 05:06:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.740 05:06:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.740 05:06:28 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.740 05:06:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:04.740 05:06:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:05.000 [2024-11-18 05:06:28.290635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:05.000 [2024-11-18 05:06:28.290793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89292 ] 00:26:05.000 [2024-11-18 05:06:28.458117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.259 [2024-11-18 05:06:28.605789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.518 [2024-11-18 05:06:28.818054] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:05.518 [2024-11-18 05:06:28.818129] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:05.518 [2024-11-18 05:06:28.818147] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:06.087 [2024-11-18 05:06:29.375219] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:06.346 05:06:29 -- common/autotest_common.sh@653 -- # es=236 00:26:06.346 05:06:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:06.346 05:06:29 -- common/autotest_common.sh@662 -- # es=108 00:26:06.346 05:06:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:06.346 05:06:29 -- common/autotest_common.sh@670 -- # es=1 00:26:06.346 05:06:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:06.346 00:26:06.346 real 0m2.979s 00:26:06.346 user 0m2.386s 00:26:06.346 sys 0m0.392s 00:26:06.346 05:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:06.346 ************************************ 00:26:06.346 END TEST dd_flag_directory_forced_aio 00:26:06.346 ************************************ 00:26:06.346 05:06:29 -- common/autotest_common.sh@10 -- # set +x 00:26:06.346 05:06:29 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:26:06.346 05:06:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:06.346 05:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.346 05:06:29 -- common/autotest_common.sh@10 -- # set +x 00:26:06.346 ************************************ 00:26:06.346 START TEST dd_flag_nofollow_forced_aio 00:26:06.346 ************************************ 00:26:06.346 05:06:29 -- common/autotest_common.sh@1114 -- # nofollow 00:26:06.346 05:06:29 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:06.346 05:06:29 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:06.346 05:06:29 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:06.347 05:06:29 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:06.347 05:06:29 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:06.347 05:06:29 -- common/autotest_common.sh@650 -- # local es=0 00:26:06.347 05:06:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:06.347 05:06:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.347 05:06:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.347 05:06:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.347 05:06:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.347 05:06:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.347 05:06:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.347 05:06:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.347 05:06:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:06.347 05:06:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:06.347 [2024-11-18 05:06:29.822543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:06.347 [2024-11-18 05:06:29.822678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89327 ] 00:26:06.606 [2024-11-18 05:06:29.975289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.606 [2024-11-18 05:06:30.126979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.865 [2024-11-18 05:06:30.343110] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:06.865 [2024-11-18 05:06:30.343186] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:06.865 [2024-11-18 05:06:30.343217] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:07.433 [2024-11-18 05:06:30.899123] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:08.002 05:06:31 -- common/autotest_common.sh@653 -- # es=216 00:26:08.002 05:06:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.002 05:06:31 -- common/autotest_common.sh@662 -- # es=88 00:26:08.002 05:06:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:08.002 05:06:31 -- common/autotest_common.sh@670 -- # es=1 00:26:08.002 05:06:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.002 05:06:31 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:08.002 05:06:31 -- common/autotest_common.sh@650 -- # local es=0 00:26:08.002 05:06:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:08.002 05:06:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.002 05:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.002 05:06:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.002 05:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.002 05:06:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.002 05:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.002 05:06:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.002 05:06:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:08.002 05:06:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:08.002 [2024-11-18 05:06:31.305117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:08.002 [2024-11-18 05:06:31.305293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89349 ] 00:26:08.002 [2024-11-18 05:06:31.474057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.261 [2024-11-18 05:06:31.623850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.520 [2024-11-18 05:06:31.847606] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:08.520 [2024-11-18 05:06:31.847677] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:08.520 [2024-11-18 05:06:31.847695] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:09.088 [2024-11-18 05:06:32.395618] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:09.347 05:06:32 -- common/autotest_common.sh@653 -- # es=216 00:26:09.347 05:06:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:09.347 05:06:32 -- common/autotest_common.sh@662 -- # es=88 00:26:09.347 05:06:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:09.347 05:06:32 -- common/autotest_common.sh@670 -- # es=1 00:26:09.347 05:06:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:09.347 05:06:32 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:09.347 05:06:32 -- dd/common.sh@98 -- # xtrace_disable 00:26:09.347 05:06:32 -- common/autotest_common.sh@10 -- # set +x 00:26:09.347 05:06:32 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:09.347 [2024-11-18 05:06:32.796563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:09.348 [2024-11-18 05:06:32.796734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89363 ] 00:26:09.607 [2024-11-18 05:06:32.964915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.607 [2024-11-18 05:06:33.116284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.866  [2024-11-18T05:06:34.328Z] Copying: 512/512 [B] (average 500 kBps) 00:26:10.804 00:26:10.804 05:06:34 -- dd/posix.sh@49 -- # [[ m3bqie9jfar2yftyriq17oglzffso4qgjmhmenzytnfgmfwq66vlo7llomniwrowry9o2coew5pt5olhdfiua0acueqyahk6otc246m65fm1jbnjxwb4jrjnt6f02qha5rabpho3r8tm4dsajaygchhs8dfgfhntzcsq3iurcozjcg6l3zw33rmhqywnliuflm7nqbejmxgljpa0e8spw242wrnl3etxng2vqyzunjvtum9vra9jn6lfmc08a2mzw3wpfb9x2t452nnrswvxj1us3rskf06momz48ib6mx1xeffs68i686cl80mar88idx6carjbees9386wf2shn4s7pk106suuaf1ist7cy53f9ypo1e09b7dsysb9c7qp3m9w9ouaac8u0h6ynol4cr1nvdy0g28xrpuy8pr6xk5yqyj8fltefcdjq0w528dmcbl3f096yviz5vzzvvbo7rogvw0t4tarxbq1immu92ymipynm4ico29zqcdvdh4i == \m\3\b\q\i\e\9\j\f\a\r\2\y\f\t\y\r\i\q\1\7\o\g\l\z\f\f\s\o\4\q\g\j\m\h\m\e\n\z\y\t\n\f\g\m\f\w\q\6\6\v\l\o\7\l\l\o\m\n\i\w\r\o\w\r\y\9\o\2\c\o\e\w\5\p\t\5\o\l\h\d\f\i\u\a\0\a\c\u\e\q\y\a\h\k\6\o\t\c\2\4\6\m\6\5\f\m\1\j\b\n\j\x\w\b\4\j\r\j\n\t\6\f\0\2\q\h\a\5\r\a\b\p\h\o\3\r\8\t\m\4\d\s\a\j\a\y\g\c\h\h\s\8\d\f\g\f\h\n\t\z\c\s\q\3\i\u\r\c\o\z\j\c\g\6\l\3\z\w\3\3\r\m\h\q\y\w\n\l\i\u\f\l\m\7\n\q\b\e\j\m\x\g\l\j\p\a\0\e\8\s\p\w\2\4\2\w\r\n\l\3\e\t\x\n\g\2\v\q\y\z\u\n\j\v\t\u\m\9\v\r\a\9\j\n\6\l\f\m\c\0\8\a\2\m\z\w\3\w\p\f\b\9\x\2\t\4\5\2\n\n\r\s\w\v\x\j\1\u\s\3\r\s\k\f\0\6\m\o\m\z\4\8\i\b\6\m\x\1\x\e\f\f\s\6\8\i\6\8\6\c\l\8\0\m\a\r\8\8\i\d\x\6\c\a\r\j\b\e\e\s\9\3\8\6\w\f\2\s\h\n\4\s\7\p\k\1\0\6\s\u\u\a\f\1\i\s\t\7\c\y\5\3\f\9\y\p\o\1\e\0\9\b\7\d\s\y\s\b\9\c\7\q\p\3\m\9\w\9\o\u\a\a\c\8\u\0\h\6\y\n\o\l\4\c\r\1\n\v\d\y\0\g\2\8\x\r\p\u\y\8\p\r\6\x\k\5\y\q\y\j\8\f\l\t\e\f\c\d\j\q\0\w\5\2\8\d\m\c\b\l\3\f\0\9\6\y\v\i\z\5\v\z\z\v\v\b\o\7\r\o\g\v\w\0\t\4\t\a\r\x\b\q\1\i\m\m\u\9\2\y\m\i\p\y\n\m\4\i\c\o\2\9\z\q\c\d\v\d\h\4\i ]] 00:26:10.804 00:26:10.804 real 0m4.464s 00:26:10.804 user 0m3.611s 00:26:10.804 sys 0m0.539s 00:26:10.804 ************************************ 00:26:10.804 END TEST dd_flag_nofollow_forced_aio 00:26:10.804 ************************************ 00:26:10.804 05:06:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:10.804 05:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:10.804 05:06:34 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:26:10.804 05:06:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:10.804 05:06:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:10.804 05:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:10.804 ************************************ 00:26:10.804 START TEST dd_flag_noatime_forced_aio 00:26:10.804 ************************************ 00:26:10.804 05:06:34 -- common/autotest_common.sh@1114 -- # noatime 00:26:10.804 05:06:34 -- dd/posix.sh@53 -- # local atime_if 00:26:10.804 05:06:34 -- dd/posix.sh@54 -- # local atime_of 00:26:10.804 05:06:34 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:10.804 05:06:34 -- dd/common.sh@98 -- # xtrace_disable 00:26:10.804 05:06:34 -- common/autotest_common.sh@10 -- # set +x 00:26:10.804 05:06:34 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:10.804 05:06:34 -- dd/posix.sh@60 -- 
# atime_if=1731906393 00:26:10.804 05:06:34 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:10.804 05:06:34 -- dd/posix.sh@61 -- # atime_of=1731906394 00:26:10.804 05:06:34 -- dd/posix.sh@66 -- # sleep 1 00:26:12.259 05:06:35 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:12.259 [2024-11-18 05:06:35.355469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:12.259 [2024-11-18 05:06:35.355641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89416 ] 00:26:12.259 [2024-11-18 05:06:35.515890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.259 [2024-11-18 05:06:35.666550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.518  [2024-11-18T05:06:36.979Z] Copying: 512/512 [B] (average 500 kBps) 00:26:13.455 00:26:13.455 05:06:36 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:13.455 05:06:36 -- dd/posix.sh@69 -- # (( atime_if == 1731906393 )) 00:26:13.455 05:06:36 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:13.455 05:06:36 -- dd/posix.sh@70 -- # (( atime_of == 1731906394 )) 00:26:13.455 05:06:36 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:13.455 [2024-11-18 05:06:36.852490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:13.455 [2024-11-18 05:06:36.853377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89438 ] 00:26:13.714 [2024-11-18 05:06:37.015515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.714 [2024-11-18 05:06:37.160951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.974  [2024-11-18T05:06:38.435Z] Copying: 512/512 [B] (average 500 kBps) 00:26:14.911 00:26:14.911 05:06:38 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:14.911 05:06:38 -- dd/posix.sh@73 -- # (( atime_if < 1731906397 )) 00:26:14.911 00:26:14.911 real 0m4.005s 00:26:14.911 user 0m2.403s 00:26:14.911 sys 0m0.375s 00:26:14.911 ************************************ 00:26:14.911 END TEST dd_flag_noatime_forced_aio 00:26:14.911 ************************************ 00:26:14.911 05:06:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:14.911 05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:26:14.911 05:06:38 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:26:14.911 05:06:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:14.912 05:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:14.912 05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:26:14.912 ************************************ 00:26:14.912 START TEST dd_flags_misc_forced_aio 00:26:14.912 ************************************ 00:26:14.912 05:06:38 -- common/autotest_common.sh@1114 -- # io 00:26:14.912 05:06:38 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:14.912 05:06:38 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:14.912 05:06:38 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:14.912 05:06:38 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:14.912 05:06:38 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:14.912 05:06:38 -- dd/common.sh@98 -- # xtrace_disable 00:26:14.912 05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:26:14.912 05:06:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:14.912 05:06:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:14.912 [2024-11-18 05:06:38.392915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:14.912 [2024-11-18 05:06:38.393043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89474 ] 00:26:15.171 [2024-11-18 05:06:38.546778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.430 [2024-11-18 05:06:38.693277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.430  [2024-11-18T05:06:39.891Z] Copying: 512/512 [B] (average 500 kBps) 00:26:16.367 00:26:16.367 05:06:39 -- dd/posix.sh@93 -- # [[ 7c2bjk07et82zk27twv6q38pkraof7hf990et456uyfxstxnqk1dbit2msdyfpz959t9ygvbl4rmbi7gbaenubsd5ye6xbzwy1piqk6i2r14fumngt5jedvc7hjmclfuizg6nx435yi79ku7gsq8r8xaxrkse4mwk0a5m2ap0d6ycqjrc4sg40nmm25xuu95abpmnjpvco8m47u2zgck0tnanf3uthgnkpm8xav7a6082ko5azvnkzqxcmrbinkpq78dq89b0aj9dypyfg74752vp9qcp3j47l3o4c5dz7jfdkdbnav1mrt2x0cpnz59zpjn0fd9059dixv0ajspdx7zxeth6sa8s7zcyujkok9ov7yxs3xmqemsg9tfuphrox4by4b168794w48c8nw0czji0ckeoe9q5zwlska0g74n2h7jtsa3mv59tyf9c8ukp6ywgy4kcopv5we0gdvdf4gjeteks8pzhg71wvhqx9z62ln1j7dpcf4purluo63 == \7\c\2\b\j\k\0\7\e\t\8\2\z\k\2\7\t\w\v\6\q\3\8\p\k\r\a\o\f\7\h\f\9\9\0\e\t\4\5\6\u\y\f\x\s\t\x\n\q\k\1\d\b\i\t\2\m\s\d\y\f\p\z\9\5\9\t\9\y\g\v\b\l\4\r\m\b\i\7\g\b\a\e\n\u\b\s\d\5\y\e\6\x\b\z\w\y\1\p\i\q\k\6\i\2\r\1\4\f\u\m\n\g\t\5\j\e\d\v\c\7\h\j\m\c\l\f\u\i\z\g\6\n\x\4\3\5\y\i\7\9\k\u\7\g\s\q\8\r\8\x\a\x\r\k\s\e\4\m\w\k\0\a\5\m\2\a\p\0\d\6\y\c\q\j\r\c\4\s\g\4\0\n\m\m\2\5\x\u\u\9\5\a\b\p\m\n\j\p\v\c\o\8\m\4\7\u\2\z\g\c\k\0\t\n\a\n\f\3\u\t\h\g\n\k\p\m\8\x\a\v\7\a\6\0\8\2\k\o\5\a\z\v\n\k\z\q\x\c\m\r\b\i\n\k\p\q\7\8\d\q\8\9\b\0\a\j\9\d\y\p\y\f\g\7\4\7\5\2\v\p\9\q\c\p\3\j\4\7\l\3\o\4\c\5\d\z\7\j\f\d\k\d\b\n\a\v\1\m\r\t\2\x\0\c\p\n\z\5\9\z\p\j\n\0\f\d\9\0\5\9\d\i\x\v\0\a\j\s\p\d\x\7\z\x\e\t\h\6\s\a\8\s\7\z\c\y\u\j\k\o\k\9\o\v\7\y\x\s\3\x\m\q\e\m\s\g\9\t\f\u\p\h\r\o\x\4\b\y\4\b\1\6\8\7\9\4\w\4\8\c\8\n\w\0\c\z\j\i\0\c\k\e\o\e\9\q\5\z\w\l\s\k\a\0\g\7\4\n\2\h\7\j\t\s\a\3\m\v\5\9\t\y\f\9\c\8\u\k\p\6\y\w\g\y\4\k\c\o\p\v\5\w\e\0\g\d\v\d\f\4\g\j\e\t\e\k\s\8\p\z\h\g\7\1\w\v\h\q\x\9\z\6\2\l\n\1\j\7\d\p\c\f\4\p\u\r\l\u\o\6\3 ]] 00:26:16.367 05:06:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:16.367 05:06:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:16.367 [2024-11-18 05:06:39.871305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:16.367 [2024-11-18 05:06:39.871465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89492 ] 00:26:16.627 [2024-11-18 05:06:40.041212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.887 [2024-11-18 05:06:40.189747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.146  [2024-11-18T05:06:41.609Z] Copying: 512/512 [B] (average 500 kBps) 00:26:18.085 00:26:18.085 05:06:41 -- dd/posix.sh@93 -- # [[ 7c2bjk07et82zk27twv6q38pkraof7hf990et456uyfxstxnqk1dbit2msdyfpz959t9ygvbl4rmbi7gbaenubsd5ye6xbzwy1piqk6i2r14fumngt5jedvc7hjmclfuizg6nx435yi79ku7gsq8r8xaxrkse4mwk0a5m2ap0d6ycqjrc4sg40nmm25xuu95abpmnjpvco8m47u2zgck0tnanf3uthgnkpm8xav7a6082ko5azvnkzqxcmrbinkpq78dq89b0aj9dypyfg74752vp9qcp3j47l3o4c5dz7jfdkdbnav1mrt2x0cpnz59zpjn0fd9059dixv0ajspdx7zxeth6sa8s7zcyujkok9ov7yxs3xmqemsg9tfuphrox4by4b168794w48c8nw0czji0ckeoe9q5zwlska0g74n2h7jtsa3mv59tyf9c8ukp6ywgy4kcopv5we0gdvdf4gjeteks8pzhg71wvhqx9z62ln1j7dpcf4purluo63 == \7\c\2\b\j\k\0\7\e\t\8\2\z\k\2\7\t\w\v\6\q\3\8\p\k\r\a\o\f\7\h\f\9\9\0\e\t\4\5\6\u\y\f\x\s\t\x\n\q\k\1\d\b\i\t\2\m\s\d\y\f\p\z\9\5\9\t\9\y\g\v\b\l\4\r\m\b\i\7\g\b\a\e\n\u\b\s\d\5\y\e\6\x\b\z\w\y\1\p\i\q\k\6\i\2\r\1\4\f\u\m\n\g\t\5\j\e\d\v\c\7\h\j\m\c\l\f\u\i\z\g\6\n\x\4\3\5\y\i\7\9\k\u\7\g\s\q\8\r\8\x\a\x\r\k\s\e\4\m\w\k\0\a\5\m\2\a\p\0\d\6\y\c\q\j\r\c\4\s\g\4\0\n\m\m\2\5\x\u\u\9\5\a\b\p\m\n\j\p\v\c\o\8\m\4\7\u\2\z\g\c\k\0\t\n\a\n\f\3\u\t\h\g\n\k\p\m\8\x\a\v\7\a\6\0\8\2\k\o\5\a\z\v\n\k\z\q\x\c\m\r\b\i\n\k\p\q\7\8\d\q\8\9\b\0\a\j\9\d\y\p\y\f\g\7\4\7\5\2\v\p\9\q\c\p\3\j\4\7\l\3\o\4\c\5\d\z\7\j\f\d\k\d\b\n\a\v\1\m\r\t\2\x\0\c\p\n\z\5\9\z\p\j\n\0\f\d\9\0\5\9\d\i\x\v\0\a\j\s\p\d\x\7\z\x\e\t\h\6\s\a\8\s\7\z\c\y\u\j\k\o\k\9\o\v\7\y\x\s\3\x\m\q\e\m\s\g\9\t\f\u\p\h\r\o\x\4\b\y\4\b\1\6\8\7\9\4\w\4\8\c\8\n\w\0\c\z\j\i\0\c\k\e\o\e\9\q\5\z\w\l\s\k\a\0\g\7\4\n\2\h\7\j\t\s\a\3\m\v\5\9\t\y\f\9\c\8\u\k\p\6\y\w\g\y\4\k\c\o\p\v\5\w\e\0\g\d\v\d\f\4\g\j\e\t\e\k\s\8\p\z\h\g\7\1\w\v\h\q\x\9\z\6\2\l\n\1\j\7\d\p\c\f\4\p\u\r\l\u\o\6\3 ]] 00:26:18.085 05:06:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:18.085 05:06:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:18.085 [2024-11-18 05:06:41.370557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:18.085 [2024-11-18 05:06:41.370720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89506 ] 00:26:18.085 [2024-11-18 05:06:41.539869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.345 [2024-11-18 05:06:41.696956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.604  [2024-11-18T05:06:43.066Z] Copying: 512/512 [B] (average 125 kBps) 00:26:19.542 00:26:19.542 05:06:42 -- dd/posix.sh@93 -- # [[ 7c2bjk07et82zk27twv6q38pkraof7hf990et456uyfxstxnqk1dbit2msdyfpz959t9ygvbl4rmbi7gbaenubsd5ye6xbzwy1piqk6i2r14fumngt5jedvc7hjmclfuizg6nx435yi79ku7gsq8r8xaxrkse4mwk0a5m2ap0d6ycqjrc4sg40nmm25xuu95abpmnjpvco8m47u2zgck0tnanf3uthgnkpm8xav7a6082ko5azvnkzqxcmrbinkpq78dq89b0aj9dypyfg74752vp9qcp3j47l3o4c5dz7jfdkdbnav1mrt2x0cpnz59zpjn0fd9059dixv0ajspdx7zxeth6sa8s7zcyujkok9ov7yxs3xmqemsg9tfuphrox4by4b168794w48c8nw0czji0ckeoe9q5zwlska0g74n2h7jtsa3mv59tyf9c8ukp6ywgy4kcopv5we0gdvdf4gjeteks8pzhg71wvhqx9z62ln1j7dpcf4purluo63 == \7\c\2\b\j\k\0\7\e\t\8\2\z\k\2\7\t\w\v\6\q\3\8\p\k\r\a\o\f\7\h\f\9\9\0\e\t\4\5\6\u\y\f\x\s\t\x\n\q\k\1\d\b\i\t\2\m\s\d\y\f\p\z\9\5\9\t\9\y\g\v\b\l\4\r\m\b\i\7\g\b\a\e\n\u\b\s\d\5\y\e\6\x\b\z\w\y\1\p\i\q\k\6\i\2\r\1\4\f\u\m\n\g\t\5\j\e\d\v\c\7\h\j\m\c\l\f\u\i\z\g\6\n\x\4\3\5\y\i\7\9\k\u\7\g\s\q\8\r\8\x\a\x\r\k\s\e\4\m\w\k\0\a\5\m\2\a\p\0\d\6\y\c\q\j\r\c\4\s\g\4\0\n\m\m\2\5\x\u\u\9\5\a\b\p\m\n\j\p\v\c\o\8\m\4\7\u\2\z\g\c\k\0\t\n\a\n\f\3\u\t\h\g\n\k\p\m\8\x\a\v\7\a\6\0\8\2\k\o\5\a\z\v\n\k\z\q\x\c\m\r\b\i\n\k\p\q\7\8\d\q\8\9\b\0\a\j\9\d\y\p\y\f\g\7\4\7\5\2\v\p\9\q\c\p\3\j\4\7\l\3\o\4\c\5\d\z\7\j\f\d\k\d\b\n\a\v\1\m\r\t\2\x\0\c\p\n\z\5\9\z\p\j\n\0\f\d\9\0\5\9\d\i\x\v\0\a\j\s\p\d\x\7\z\x\e\t\h\6\s\a\8\s\7\z\c\y\u\j\k\o\k\9\o\v\7\y\x\s\3\x\m\q\e\m\s\g\9\t\f\u\p\h\r\o\x\4\b\y\4\b\1\6\8\7\9\4\w\4\8\c\8\n\w\0\c\z\j\i\0\c\k\e\o\e\9\q\5\z\w\l\s\k\a\0\g\7\4\n\2\h\7\j\t\s\a\3\m\v\5\9\t\y\f\9\c\8\u\k\p\6\y\w\g\y\4\k\c\o\p\v\5\w\e\0\g\d\v\d\f\4\g\j\e\t\e\k\s\8\p\z\h\g\7\1\w\v\h\q\x\9\z\6\2\l\n\1\j\7\d\p\c\f\4\p\u\r\l\u\o\6\3 ]] 00:26:19.542 05:06:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:19.542 05:06:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:19.542 [2024-11-18 05:06:42.881546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:19.543 [2024-11-18 05:06:42.881711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89530 ] 00:26:19.543 [2024-11-18 05:06:43.050290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.802 [2024-11-18 05:06:43.197035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.061  [2024-11-18T05:06:44.522Z] Copying: 512/512 [B] (average 100 kBps) 00:26:20.998 00:26:20.998 05:06:44 -- dd/posix.sh@93 -- # [[ 7c2bjk07et82zk27twv6q38pkraof7hf990et456uyfxstxnqk1dbit2msdyfpz959t9ygvbl4rmbi7gbaenubsd5ye6xbzwy1piqk6i2r14fumngt5jedvc7hjmclfuizg6nx435yi79ku7gsq8r8xaxrkse4mwk0a5m2ap0d6ycqjrc4sg40nmm25xuu95abpmnjpvco8m47u2zgck0tnanf3uthgnkpm8xav7a6082ko5azvnkzqxcmrbinkpq78dq89b0aj9dypyfg74752vp9qcp3j47l3o4c5dz7jfdkdbnav1mrt2x0cpnz59zpjn0fd9059dixv0ajspdx7zxeth6sa8s7zcyujkok9ov7yxs3xmqemsg9tfuphrox4by4b168794w48c8nw0czji0ckeoe9q5zwlska0g74n2h7jtsa3mv59tyf9c8ukp6ywgy4kcopv5we0gdvdf4gjeteks8pzhg71wvhqx9z62ln1j7dpcf4purluo63 == \7\c\2\b\j\k\0\7\e\t\8\2\z\k\2\7\t\w\v\6\q\3\8\p\k\r\a\o\f\7\h\f\9\9\0\e\t\4\5\6\u\y\f\x\s\t\x\n\q\k\1\d\b\i\t\2\m\s\d\y\f\p\z\9\5\9\t\9\y\g\v\b\l\4\r\m\b\i\7\g\b\a\e\n\u\b\s\d\5\y\e\6\x\b\z\w\y\1\p\i\q\k\6\i\2\r\1\4\f\u\m\n\g\t\5\j\e\d\v\c\7\h\j\m\c\l\f\u\i\z\g\6\n\x\4\3\5\y\i\7\9\k\u\7\g\s\q\8\r\8\x\a\x\r\k\s\e\4\m\w\k\0\a\5\m\2\a\p\0\d\6\y\c\q\j\r\c\4\s\g\4\0\n\m\m\2\5\x\u\u\9\5\a\b\p\m\n\j\p\v\c\o\8\m\4\7\u\2\z\g\c\k\0\t\n\a\n\f\3\u\t\h\g\n\k\p\m\8\x\a\v\7\a\6\0\8\2\k\o\5\a\z\v\n\k\z\q\x\c\m\r\b\i\n\k\p\q\7\8\d\q\8\9\b\0\a\j\9\d\y\p\y\f\g\7\4\7\5\2\v\p\9\q\c\p\3\j\4\7\l\3\o\4\c\5\d\z\7\j\f\d\k\d\b\n\a\v\1\m\r\t\2\x\0\c\p\n\z\5\9\z\p\j\n\0\f\d\9\0\5\9\d\i\x\v\0\a\j\s\p\d\x\7\z\x\e\t\h\6\s\a\8\s\7\z\c\y\u\j\k\o\k\9\o\v\7\y\x\s\3\x\m\q\e\m\s\g\9\t\f\u\p\h\r\o\x\4\b\y\4\b\1\6\8\7\9\4\w\4\8\c\8\n\w\0\c\z\j\i\0\c\k\e\o\e\9\q\5\z\w\l\s\k\a\0\g\7\4\n\2\h\7\j\t\s\a\3\m\v\5\9\t\y\f\9\c\8\u\k\p\6\y\w\g\y\4\k\c\o\p\v\5\w\e\0\g\d\v\d\f\4\g\j\e\t\e\k\s\8\p\z\h\g\7\1\w\v\h\q\x\9\z\6\2\l\n\1\j\7\d\p\c\f\4\p\u\r\l\u\o\6\3 ]] 00:26:20.998 05:06:44 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:20.998 05:06:44 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:20.998 05:06:44 -- dd/common.sh@98 -- # xtrace_disable 00:26:20.998 05:06:44 -- common/autotest_common.sh@10 -- # set +x 00:26:20.998 05:06:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:20.998 05:06:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:20.998 [2024-11-18 05:06:44.383585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:20.998 [2024-11-18 05:06:44.383712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89545 ] 00:26:21.257 [2024-11-18 05:06:44.536883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.257 [2024-11-18 05:06:44.682534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.517  [2024-11-18T05:06:45.979Z] Copying: 512/512 [B] (average 500 kBps) 00:26:22.455 00:26:22.455 05:06:45 -- dd/posix.sh@93 -- # [[ jy85ffsob18hj3ietozsx70pqr1sy4jxefkz0pstafdpsez2138sftb5sfotos6oyewyuwyalbvl3mg3b3szgcl070vs04clxr3s5ckrhrg2omskh94b6lp9lkv709t9hqzeurbqs8yho3qql1d6w3ibwt48h1enu8qb80o3y166mg7fdxls4esjvddgg9gxaigwuh6ic8kxj5ut4ig80262rl7ot8vrnwz1l4jwalb7iglgwo5cdrhigm9jdac0n8d7jmmm74izdjuuxoszg4zmh00gklnp6v4au7egn969j6tu7cmxwlmjy43vjexkd5pkzrgiob3s7u9j5spskogh5i60xvx4pvc3txp9uftucdqu5td4rs9fn7ck2pl578k1re5fo8yysdtqni0tnxbxb4zez5s99a6cs0gtwn5e3p10pa2drr5crza4kvep8vfhxbl8du241cuoi3abl1ug5rqtaoxtgqdwh5gxw3yktzpb4wiyzx82rfuqwzxt == \j\y\8\5\f\f\s\o\b\1\8\h\j\3\i\e\t\o\z\s\x\7\0\p\q\r\1\s\y\4\j\x\e\f\k\z\0\p\s\t\a\f\d\p\s\e\z\2\1\3\8\s\f\t\b\5\s\f\o\t\o\s\6\o\y\e\w\y\u\w\y\a\l\b\v\l\3\m\g\3\b\3\s\z\g\c\l\0\7\0\v\s\0\4\c\l\x\r\3\s\5\c\k\r\h\r\g\2\o\m\s\k\h\9\4\b\6\l\p\9\l\k\v\7\0\9\t\9\h\q\z\e\u\r\b\q\s\8\y\h\o\3\q\q\l\1\d\6\w\3\i\b\w\t\4\8\h\1\e\n\u\8\q\b\8\0\o\3\y\1\6\6\m\g\7\f\d\x\l\s\4\e\s\j\v\d\d\g\g\9\g\x\a\i\g\w\u\h\6\i\c\8\k\x\j\5\u\t\4\i\g\8\0\2\6\2\r\l\7\o\t\8\v\r\n\w\z\1\l\4\j\w\a\l\b\7\i\g\l\g\w\o\5\c\d\r\h\i\g\m\9\j\d\a\c\0\n\8\d\7\j\m\m\m\7\4\i\z\d\j\u\u\x\o\s\z\g\4\z\m\h\0\0\g\k\l\n\p\6\v\4\a\u\7\e\g\n\9\6\9\j\6\t\u\7\c\m\x\w\l\m\j\y\4\3\v\j\e\x\k\d\5\p\k\z\r\g\i\o\b\3\s\7\u\9\j\5\s\p\s\k\o\g\h\5\i\6\0\x\v\x\4\p\v\c\3\t\x\p\9\u\f\t\u\c\d\q\u\5\t\d\4\r\s\9\f\n\7\c\k\2\p\l\5\7\8\k\1\r\e\5\f\o\8\y\y\s\d\t\q\n\i\0\t\n\x\b\x\b\4\z\e\z\5\s\9\9\a\6\c\s\0\g\t\w\n\5\e\3\p\1\0\p\a\2\d\r\r\5\c\r\z\a\4\k\v\e\p\8\v\f\h\x\b\l\8\d\u\2\4\1\c\u\o\i\3\a\b\l\1\u\g\5\r\q\t\a\o\x\t\g\q\d\w\h\5\g\x\w\3\y\k\t\z\p\b\4\w\i\y\z\x\8\2\r\f\u\q\w\z\x\t ]] 00:26:22.455 05:06:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:22.455 05:06:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:22.455 [2024-11-18 05:06:45.866595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:22.455 [2024-11-18 05:06:45.866746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89565 ] 00:26:22.714 [2024-11-18 05:06:46.035926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.714 [2024-11-18 05:06:46.183453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.973  [2024-11-18T05:06:47.434Z] Copying: 512/512 [B] (average 500 kBps) 00:26:23.911 00:26:23.911 05:06:47 -- dd/posix.sh@93 -- # [[ jy85ffsob18hj3ietozsx70pqr1sy4jxefkz0pstafdpsez2138sftb5sfotos6oyewyuwyalbvl3mg3b3szgcl070vs04clxr3s5ckrhrg2omskh94b6lp9lkv709t9hqzeurbqs8yho3qql1d6w3ibwt48h1enu8qb80o3y166mg7fdxls4esjvddgg9gxaigwuh6ic8kxj5ut4ig80262rl7ot8vrnwz1l4jwalb7iglgwo5cdrhigm9jdac0n8d7jmmm74izdjuuxoszg4zmh00gklnp6v4au7egn969j6tu7cmxwlmjy43vjexkd5pkzrgiob3s7u9j5spskogh5i60xvx4pvc3txp9uftucdqu5td4rs9fn7ck2pl578k1re5fo8yysdtqni0tnxbxb4zez5s99a6cs0gtwn5e3p10pa2drr5crza4kvep8vfhxbl8du241cuoi3abl1ug5rqtaoxtgqdwh5gxw3yktzpb4wiyzx82rfuqwzxt == \j\y\8\5\f\f\s\o\b\1\8\h\j\3\i\e\t\o\z\s\x\7\0\p\q\r\1\s\y\4\j\x\e\f\k\z\0\p\s\t\a\f\d\p\s\e\z\2\1\3\8\s\f\t\b\5\s\f\o\t\o\s\6\o\y\e\w\y\u\w\y\a\l\b\v\l\3\m\g\3\b\3\s\z\g\c\l\0\7\0\v\s\0\4\c\l\x\r\3\s\5\c\k\r\h\r\g\2\o\m\s\k\h\9\4\b\6\l\p\9\l\k\v\7\0\9\t\9\h\q\z\e\u\r\b\q\s\8\y\h\o\3\q\q\l\1\d\6\w\3\i\b\w\t\4\8\h\1\e\n\u\8\q\b\8\0\o\3\y\1\6\6\m\g\7\f\d\x\l\s\4\e\s\j\v\d\d\g\g\9\g\x\a\i\g\w\u\h\6\i\c\8\k\x\j\5\u\t\4\i\g\8\0\2\6\2\r\l\7\o\t\8\v\r\n\w\z\1\l\4\j\w\a\l\b\7\i\g\l\g\w\o\5\c\d\r\h\i\g\m\9\j\d\a\c\0\n\8\d\7\j\m\m\m\7\4\i\z\d\j\u\u\x\o\s\z\g\4\z\m\h\0\0\g\k\l\n\p\6\v\4\a\u\7\e\g\n\9\6\9\j\6\t\u\7\c\m\x\w\l\m\j\y\4\3\v\j\e\x\k\d\5\p\k\z\r\g\i\o\b\3\s\7\u\9\j\5\s\p\s\k\o\g\h\5\i\6\0\x\v\x\4\p\v\c\3\t\x\p\9\u\f\t\u\c\d\q\u\5\t\d\4\r\s\9\f\n\7\c\k\2\p\l\5\7\8\k\1\r\e\5\f\o\8\y\y\s\d\t\q\n\i\0\t\n\x\b\x\b\4\z\e\z\5\s\9\9\a\6\c\s\0\g\t\w\n\5\e\3\p\1\0\p\a\2\d\r\r\5\c\r\z\a\4\k\v\e\p\8\v\f\h\x\b\l\8\d\u\2\4\1\c\u\o\i\3\a\b\l\1\u\g\5\r\q\t\a\o\x\t\g\q\d\w\h\5\g\x\w\3\y\k\t\z\p\b\4\w\i\y\z\x\8\2\r\f\u\q\w\z\x\t ]] 00:26:23.911 05:06:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:23.911 05:06:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:23.911 [2024-11-18 05:06:47.366365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:23.911 [2024-11-18 05:06:47.366526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89579 ] 00:26:24.170 [2024-11-18 05:06:47.535666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.170 [2024-11-18 05:06:47.685960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.429  [2024-11-18T05:06:48.892Z] Copying: 512/512 [B] (average 100 kBps) 00:26:25.368 00:26:25.368 05:06:48 -- dd/posix.sh@93 -- # [[ jy85ffsob18hj3ietozsx70pqr1sy4jxefkz0pstafdpsez2138sftb5sfotos6oyewyuwyalbvl3mg3b3szgcl070vs04clxr3s5ckrhrg2omskh94b6lp9lkv709t9hqzeurbqs8yho3qql1d6w3ibwt48h1enu8qb80o3y166mg7fdxls4esjvddgg9gxaigwuh6ic8kxj5ut4ig80262rl7ot8vrnwz1l4jwalb7iglgwo5cdrhigm9jdac0n8d7jmmm74izdjuuxoszg4zmh00gklnp6v4au7egn969j6tu7cmxwlmjy43vjexkd5pkzrgiob3s7u9j5spskogh5i60xvx4pvc3txp9uftucdqu5td4rs9fn7ck2pl578k1re5fo8yysdtqni0tnxbxb4zez5s99a6cs0gtwn5e3p10pa2drr5crza4kvep8vfhxbl8du241cuoi3abl1ug5rqtaoxtgqdwh5gxw3yktzpb4wiyzx82rfuqwzxt == \j\y\8\5\f\f\s\o\b\1\8\h\j\3\i\e\t\o\z\s\x\7\0\p\q\r\1\s\y\4\j\x\e\f\k\z\0\p\s\t\a\f\d\p\s\e\z\2\1\3\8\s\f\t\b\5\s\f\o\t\o\s\6\o\y\e\w\y\u\w\y\a\l\b\v\l\3\m\g\3\b\3\s\z\g\c\l\0\7\0\v\s\0\4\c\l\x\r\3\s\5\c\k\r\h\r\g\2\o\m\s\k\h\9\4\b\6\l\p\9\l\k\v\7\0\9\t\9\h\q\z\e\u\r\b\q\s\8\y\h\o\3\q\q\l\1\d\6\w\3\i\b\w\t\4\8\h\1\e\n\u\8\q\b\8\0\o\3\y\1\6\6\m\g\7\f\d\x\l\s\4\e\s\j\v\d\d\g\g\9\g\x\a\i\g\w\u\h\6\i\c\8\k\x\j\5\u\t\4\i\g\8\0\2\6\2\r\l\7\o\t\8\v\r\n\w\z\1\l\4\j\w\a\l\b\7\i\g\l\g\w\o\5\c\d\r\h\i\g\m\9\j\d\a\c\0\n\8\d\7\j\m\m\m\7\4\i\z\d\j\u\u\x\o\s\z\g\4\z\m\h\0\0\g\k\l\n\p\6\v\4\a\u\7\e\g\n\9\6\9\j\6\t\u\7\c\m\x\w\l\m\j\y\4\3\v\j\e\x\k\d\5\p\k\z\r\g\i\o\b\3\s\7\u\9\j\5\s\p\s\k\o\g\h\5\i\6\0\x\v\x\4\p\v\c\3\t\x\p\9\u\f\t\u\c\d\q\u\5\t\d\4\r\s\9\f\n\7\c\k\2\p\l\5\7\8\k\1\r\e\5\f\o\8\y\y\s\d\t\q\n\i\0\t\n\x\b\x\b\4\z\e\z\5\s\9\9\a\6\c\s\0\g\t\w\n\5\e\3\p\1\0\p\a\2\d\r\r\5\c\r\z\a\4\k\v\e\p\8\v\f\h\x\b\l\8\d\u\2\4\1\c\u\o\i\3\a\b\l\1\u\g\5\r\q\t\a\o\x\t\g\q\d\w\h\5\g\x\w\3\y\k\t\z\p\b\4\w\i\y\z\x\8\2\r\f\u\q\w\z\x\t ]] 00:26:25.368 05:06:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:25.368 05:06:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:25.368 [2024-11-18 05:06:48.878325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:25.368 [2024-11-18 05:06:48.878486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89599 ] 00:26:25.627 [2024-11-18 05:06:49.047567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.885 [2024-11-18 05:06:49.200074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.143  [2024-11-18T05:06:50.604Z] Copying: 512/512 [B] (average 125 kBps) 00:26:27.080 00:26:27.080 ************************************ 00:26:27.080 END TEST dd_flags_misc_forced_aio 00:26:27.080 ************************************ 00:26:27.080 05:06:50 -- dd/posix.sh@93 -- # [[ jy85ffsob18hj3ietozsx70pqr1sy4jxefkz0pstafdpsez2138sftb5sfotos6oyewyuwyalbvl3mg3b3szgcl070vs04clxr3s5ckrhrg2omskh94b6lp9lkv709t9hqzeurbqs8yho3qql1d6w3ibwt48h1enu8qb80o3y166mg7fdxls4esjvddgg9gxaigwuh6ic8kxj5ut4ig80262rl7ot8vrnwz1l4jwalb7iglgwo5cdrhigm9jdac0n8d7jmmm74izdjuuxoszg4zmh00gklnp6v4au7egn969j6tu7cmxwlmjy43vjexkd5pkzrgiob3s7u9j5spskogh5i60xvx4pvc3txp9uftucdqu5td4rs9fn7ck2pl578k1re5fo8yysdtqni0tnxbxb4zez5s99a6cs0gtwn5e3p10pa2drr5crza4kvep8vfhxbl8du241cuoi3abl1ug5rqtaoxtgqdwh5gxw3yktzpb4wiyzx82rfuqwzxt == \j\y\8\5\f\f\s\o\b\1\8\h\j\3\i\e\t\o\z\s\x\7\0\p\q\r\1\s\y\4\j\x\e\f\k\z\0\p\s\t\a\f\d\p\s\e\z\2\1\3\8\s\f\t\b\5\s\f\o\t\o\s\6\o\y\e\w\y\u\w\y\a\l\b\v\l\3\m\g\3\b\3\s\z\g\c\l\0\7\0\v\s\0\4\c\l\x\r\3\s\5\c\k\r\h\r\g\2\o\m\s\k\h\9\4\b\6\l\p\9\l\k\v\7\0\9\t\9\h\q\z\e\u\r\b\q\s\8\y\h\o\3\q\q\l\1\d\6\w\3\i\b\w\t\4\8\h\1\e\n\u\8\q\b\8\0\o\3\y\1\6\6\m\g\7\f\d\x\l\s\4\e\s\j\v\d\d\g\g\9\g\x\a\i\g\w\u\h\6\i\c\8\k\x\j\5\u\t\4\i\g\8\0\2\6\2\r\l\7\o\t\8\v\r\n\w\z\1\l\4\j\w\a\l\b\7\i\g\l\g\w\o\5\c\d\r\h\i\g\m\9\j\d\a\c\0\n\8\d\7\j\m\m\m\7\4\i\z\d\j\u\u\x\o\s\z\g\4\z\m\h\0\0\g\k\l\n\p\6\v\4\a\u\7\e\g\n\9\6\9\j\6\t\u\7\c\m\x\w\l\m\j\y\4\3\v\j\e\x\k\d\5\p\k\z\r\g\i\o\b\3\s\7\u\9\j\5\s\p\s\k\o\g\h\5\i\6\0\x\v\x\4\p\v\c\3\t\x\p\9\u\f\t\u\c\d\q\u\5\t\d\4\r\s\9\f\n\7\c\k\2\p\l\5\7\8\k\1\r\e\5\f\o\8\y\y\s\d\t\q\n\i\0\t\n\x\b\x\b\4\z\e\z\5\s\9\9\a\6\c\s\0\g\t\w\n\5\e\3\p\1\0\p\a\2\d\r\r\5\c\r\z\a\4\k\v\e\p\8\v\f\h\x\b\l\8\d\u\2\4\1\c\u\o\i\3\a\b\l\1\u\g\5\r\q\t\a\o\x\t\g\q\d\w\h\5\g\x\w\3\y\k\t\z\p\b\4\w\i\y\z\x\8\2\r\f\u\q\w\z\x\t ]] 00:26:27.080 00:26:27.080 real 0m11.998s 00:26:27.080 user 0m9.613s 00:26:27.080 sys 0m1.448s 00:26:27.080 05:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:27.080 05:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.080 05:06:50 -- dd/posix.sh@1 -- # cleanup 00:26:27.080 05:06:50 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:27.080 05:06:50 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:27.080 00:26:27.080 real 0m50.895s 00:26:27.080 user 0m38.954s 00:26:27.080 sys 0m6.299s 00:26:27.080 05:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:27.080 05:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.080 ************************************ 00:26:27.080 END TEST spdk_dd_posix 00:26:27.080 ************************************ 00:26:27.080 05:06:50 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:27.080 05:06:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:27.081 05:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:26:27.081 05:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.081 ************************************ 00:26:27.081 START TEST spdk_dd_malloc 00:26:27.081 ************************************ 00:26:27.081 05:06:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:27.081 * Looking for test storage... 00:26:27.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:27.081 05:06:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:27.081 05:06:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:27.081 05:06:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:27.081 05:06:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:27.081 05:06:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:27.081 05:06:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:27.081 05:06:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:27.081 05:06:50 -- scripts/common.sh@335 -- # IFS=.-: 00:26:27.081 05:06:50 -- scripts/common.sh@335 -- # read -ra ver1 00:26:27.081 05:06:50 -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.081 05:06:50 -- scripts/common.sh@336 -- # read -ra ver2 00:26:27.081 05:06:50 -- scripts/common.sh@337 -- # local 'op=<' 00:26:27.081 05:06:50 -- scripts/common.sh@339 -- # ver1_l=2 00:26:27.081 05:06:50 -- scripts/common.sh@340 -- # ver2_l=1 00:26:27.081 05:06:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:27.081 05:06:50 -- scripts/common.sh@343 -- # case "$op" in 00:26:27.081 05:06:50 -- scripts/common.sh@344 -- # : 1 00:26:27.081 05:06:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:27.081 05:06:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:27.081 05:06:50 -- scripts/common.sh@364 -- # decimal 1 00:26:27.081 05:06:50 -- scripts/common.sh@352 -- # local d=1 00:26:27.081 05:06:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.081 05:06:50 -- scripts/common.sh@354 -- # echo 1 00:26:27.081 05:06:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:27.081 05:06:50 -- scripts/common.sh@365 -- # decimal 2 00:26:27.081 05:06:50 -- scripts/common.sh@352 -- # local d=2 00:26:27.081 05:06:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.081 05:06:50 -- scripts/common.sh@354 -- # echo 2 00:26:27.081 05:06:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:27.081 05:06:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:27.081 05:06:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:27.081 05:06:50 -- scripts/common.sh@367 -- # return 0 00:26:27.081 05:06:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.081 05:06:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:27.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.081 --rc genhtml_branch_coverage=1 00:26:27.081 --rc genhtml_function_coverage=1 00:26:27.081 --rc genhtml_legend=1 00:26:27.081 --rc geninfo_all_blocks=1 00:26:27.081 --rc geninfo_unexecuted_blocks=1 00:26:27.081 00:26:27.081 ' 00:26:27.081 05:06:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:27.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.081 --rc genhtml_branch_coverage=1 00:26:27.081 --rc genhtml_function_coverage=1 00:26:27.081 --rc genhtml_legend=1 00:26:27.081 --rc geninfo_all_blocks=1 00:26:27.081 --rc geninfo_unexecuted_blocks=1 00:26:27.081 00:26:27.081 ' 00:26:27.081 05:06:50 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:26:27.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.081 --rc genhtml_branch_coverage=1 00:26:27.081 --rc genhtml_function_coverage=1 00:26:27.081 --rc genhtml_legend=1 00:26:27.081 --rc geninfo_all_blocks=1 00:26:27.081 --rc geninfo_unexecuted_blocks=1 00:26:27.081 00:26:27.081 ' 00:26:27.081 05:06:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:27.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.081 --rc genhtml_branch_coverage=1 00:26:27.081 --rc genhtml_function_coverage=1 00:26:27.081 --rc genhtml_legend=1 00:26:27.081 --rc geninfo_all_blocks=1 00:26:27.081 --rc geninfo_unexecuted_blocks=1 00:26:27.081 00:26:27.081 ' 00:26:27.081 05:06:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.341 05:06:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.341 05:06:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.341 05:06:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.341 05:06:50 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:27.341 05:06:50 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:27.341 05:06:50 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:27.341 05:06:50 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:27.341 05:06:50 -- paths/export.sh@6 -- # export PATH 00:26:27.341 05:06:50 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:27.341 05:06:50 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:26:27.341 05:06:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:27.341 05:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:27.341 05:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.341 ************************************ 00:26:27.341 START TEST dd_malloc_copy 00:26:27.341 ************************************ 00:26:27.341 05:06:50 -- common/autotest_common.sh@1114 -- # malloc_copy 00:26:27.341 05:06:50 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:26:27.341 05:06:50 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:26:27.341 05:06:50 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:26:27.341 05:06:50 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:26:27.341 05:06:50 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:26:27.341 05:06:50 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:26:27.341 05:06:50 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:26:27.341 05:06:50 -- dd/malloc.sh@28 -- # gen_conf 00:26:27.341 05:06:50 -- dd/common.sh@31 -- # xtrace_disable 00:26:27.341 05:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.341 { 00:26:27.341 "subsystems": [ 00:26:27.341 { 00:26:27.341 "subsystem": "bdev", 00:26:27.341 "config": [ 00:26:27.341 { 00:26:27.341 "params": { 00:26:27.341 "block_size": 512, 00:26:27.341 "num_blocks": 1048576, 00:26:27.341 "name": "malloc0" 00:26:27.341 }, 00:26:27.341 "method": "bdev_malloc_create" 00:26:27.341 }, 00:26:27.341 { 00:26:27.341 "params": { 00:26:27.341 "block_size": 512, 00:26:27.341 "num_blocks": 1048576, 00:26:27.341 "name": "malloc1" 00:26:27.341 }, 00:26:27.341 "method": "bdev_malloc_create" 
00:26:27.341 }, 00:26:27.341 { 00:26:27.341 "method": "bdev_wait_for_examine" 00:26:27.341 } 00:26:27.341 ] 00:26:27.341 } 00:26:27.341 ] 00:26:27.341 } 00:26:27.341 [2024-11-18 05:06:50.671151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:27.341 [2024-11-18 05:06:50.671328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89692 ] 00:26:27.341 [2024-11-18 05:06:50.840314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.600 [2024-11-18 05:06:50.990929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.506  [2024-11-18T05:06:53.967Z] Copying: 213/512 [MB] (213 MBps) [2024-11-18T05:06:54.536Z] Copying: 430/512 [MB] (216 MBps) [2024-11-18T05:06:57.827Z] Copying: 512/512 [MB] (average 214 MBps) 00:26:34.303 00:26:34.303 05:06:57 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:26:34.303 05:06:57 -- dd/malloc.sh@33 -- # gen_conf 00:26:34.303 05:06:57 -- dd/common.sh@31 -- # xtrace_disable 00:26:34.303 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:26:34.303 { 00:26:34.303 "subsystems": [ 00:26:34.304 { 00:26:34.304 "subsystem": "bdev", 00:26:34.304 "config": [ 00:26:34.304 { 00:26:34.304 "params": { 00:26:34.304 "block_size": 512, 00:26:34.304 "num_blocks": 1048576, 00:26:34.304 "name": "malloc0" 00:26:34.304 }, 00:26:34.304 "method": "bdev_malloc_create" 00:26:34.304 }, 00:26:34.304 { 00:26:34.304 "params": { 00:26:34.304 "block_size": 512, 00:26:34.304 "num_blocks": 1048576, 00:26:34.304 "name": "malloc1" 00:26:34.304 }, 00:26:34.304 "method": "bdev_malloc_create" 00:26:34.304 }, 00:26:34.304 { 00:26:34.304 "method": "bdev_wait_for_examine" 00:26:34.304 } 00:26:34.304 ] 00:26:34.304 } 00:26:34.304 ] 00:26:34.304 } 00:26:34.304 [2024-11-18 05:06:57.211136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:34.304 [2024-11-18 05:06:57.211294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89763 ] 00:26:34.304 [2024-11-18 05:06:57.378251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.304 [2024-11-18 05:06:57.524009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.209  [2024-11-18T05:07:00.670Z] Copying: 214/512 [MB] (214 MBps) [2024-11-18T05:07:00.929Z] Copying: 427/512 [MB] (212 MBps) [2024-11-18T05:07:04.221Z] Copying: 512/512 [MB] (average 214 MBps) 00:26:40.697 00:26:40.697 00:26:40.697 real 0m13.071s 00:26:40.697 user 0m11.906s 00:26:40.697 sys 0m0.972s 00:26:40.697 05:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.697 ************************************ 00:26:40.697 END TEST dd_malloc_copy 00:26:40.697 ************************************ 00:26:40.697 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 00:26:40.697 real 0m13.291s 00:26:40.697 user 0m12.034s 00:26:40.697 sys 0m1.075s 00:26:40.697 05:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.697 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 ************************************ 00:26:40.697 END TEST spdk_dd_malloc 00:26:40.697 ************************************ 00:26:40.697 05:07:03 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:40.697 05:07:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:40.697 05:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.697 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 ************************************ 00:26:40.697 START TEST spdk_dd_bdev_to_bdev 00:26:40.697 ************************************ 00:26:40.697 05:07:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:40.697 * Looking for test storage... 00:26:40.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:40.697 05:07:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:40.697 05:07:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:40.697 05:07:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:40.697 05:07:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:40.697 05:07:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:40.697 05:07:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:40.697 05:07:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:40.697 05:07:03 -- scripts/common.sh@335 -- # IFS=.-: 00:26:40.697 05:07:03 -- scripts/common.sh@335 -- # read -ra ver1 00:26:40.697 05:07:03 -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.697 05:07:03 -- scripts/common.sh@336 -- # read -ra ver2 00:26:40.697 05:07:03 -- scripts/common.sh@337 -- # local 'op=<' 00:26:40.697 05:07:03 -- scripts/common.sh@339 -- # ver1_l=2 00:26:40.697 05:07:03 -- scripts/common.sh@340 -- # ver2_l=1 00:26:40.697 05:07:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:40.697 05:07:03 -- scripts/common.sh@343 -- # case "$op" in 00:26:40.697 05:07:03 -- scripts/common.sh@344 -- # : 1 00:26:40.697 05:07:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:40.697 05:07:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.697 05:07:03 -- scripts/common.sh@364 -- # decimal 1 00:26:40.697 05:07:03 -- scripts/common.sh@352 -- # local d=1 00:26:40.697 05:07:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.697 05:07:03 -- scripts/common.sh@354 -- # echo 1 00:26:40.697 05:07:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:40.697 05:07:03 -- scripts/common.sh@365 -- # decimal 2 00:26:40.697 05:07:03 -- scripts/common.sh@352 -- # local d=2 00:26:40.697 05:07:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.697 05:07:03 -- scripts/common.sh@354 -- # echo 2 00:26:40.697 05:07:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:40.697 05:07:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:40.697 05:07:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:40.697 05:07:03 -- scripts/common.sh@367 -- # return 0 00:26:40.697 05:07:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.697 05:07:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.697 --rc genhtml_branch_coverage=1 00:26:40.697 --rc genhtml_function_coverage=1 00:26:40.697 --rc genhtml_legend=1 00:26:40.697 --rc geninfo_all_blocks=1 00:26:40.697 --rc geninfo_unexecuted_blocks=1 00:26:40.697 00:26:40.697 ' 00:26:40.697 05:07:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.697 --rc genhtml_branch_coverage=1 00:26:40.697 --rc genhtml_function_coverage=1 00:26:40.697 --rc genhtml_legend=1 00:26:40.697 --rc geninfo_all_blocks=1 00:26:40.697 --rc geninfo_unexecuted_blocks=1 00:26:40.697 00:26:40.697 ' 00:26:40.697 05:07:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.697 --rc genhtml_branch_coverage=1 00:26:40.697 --rc genhtml_function_coverage=1 00:26:40.697 --rc genhtml_legend=1 00:26:40.697 --rc geninfo_all_blocks=1 00:26:40.697 --rc geninfo_unexecuted_blocks=1 00:26:40.697 00:26:40.697 ' 00:26:40.697 05:07:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.697 --rc genhtml_branch_coverage=1 00:26:40.697 --rc genhtml_function_coverage=1 00:26:40.697 --rc genhtml_legend=1 00:26:40.697 --rc geninfo_all_blocks=1 00:26:40.697 --rc geninfo_unexecuted_blocks=1 00:26:40.697 00:26:40.697 ' 00:26:40.697 05:07:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.697 05:07:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.698 05:07:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.698 05:07:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.698 05:07:03 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.698 05:07:03 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.698 05:07:03 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.698 05:07:03 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.698 05:07:03 -- paths/export.sh@6 -- # export PATH 00:26:40.698 05:07:03 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:26:40.698 05:07:03 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:26:40.698 [2024-11-18 05:07:04.009670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:40.698 [2024-11-18 05:07:04.010535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89906 ] 00:26:40.698 [2024-11-18 05:07:04.191498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.957 [2024-11-18 05:07:04.422159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.540  [2024-11-18T05:07:06.038Z] Copying: 256/256 [MB] (average 1868 MBps) 00:26:42.514 00:26:42.514 05:07:05 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:42.514 05:07:05 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:42.514 05:07:05 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:26:42.514 05:07:05 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:26:42.514 05:07:05 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:42.514 05:07:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:42.514 05:07:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:42.514 05:07:05 -- common/autotest_common.sh@10 -- # set +x 00:26:42.514 ************************************ 00:26:42.514 START TEST dd_inflate_file 00:26:42.514 ************************************ 00:26:42.514 05:07:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:42.514 [2024-11-18 05:07:05.789297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:42.514 [2024-11-18 05:07:05.789451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89923 ] 00:26:42.514 [2024-11-18 05:07:05.954398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.774 [2024-11-18 05:07:06.105305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.034  [2024-11-18T05:07:07.495Z] Copying: 64/64 [MB] (average 1488 MBps) 00:26:43.971 00:26:43.971 00:26:43.971 real 0m1.560s 00:26:43.971 user 0m1.206s 00:26:43.971 sys 0m0.237s 00:26:43.971 05:07:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:43.971 ************************************ 00:26:43.971 END TEST dd_inflate_file 00:26:43.971 ************************************ 00:26:43.971 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:26:43.971 05:07:07 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:26:43.971 05:07:07 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:26:43.971 05:07:07 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:43.971 05:07:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:43.971 05:07:07 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:26:43.971 05:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:43.971 05:07:07 -- dd/common.sh@31 -- # xtrace_disable 00:26:43.971 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:26:43.971 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:26:43.971 ************************************ 00:26:43.971 START TEST dd_copy_to_out_bdev 00:26:43.971 ************************************ 00:26:43.971 05:07:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:43.971 { 00:26:43.971 "subsystems": [ 00:26:43.971 { 00:26:43.971 "subsystem": "bdev", 00:26:43.971 "config": [ 00:26:43.971 { 00:26:43.971 "params": { 00:26:43.972 "block_size": 4096, 00:26:43.972 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:43.972 "name": "aio1" 00:26:43.972 }, 00:26:43.972 "method": "bdev_aio_create" 00:26:43.972 }, 00:26:43.972 { 00:26:43.972 "params": { 00:26:43.972 "trtype": "pcie", 00:26:43.972 "traddr": "0000:00:06.0", 00:26:43.972 "name": "Nvme0" 00:26:43.972 }, 00:26:43.972 "method": "bdev_nvme_attach_controller" 00:26:43.972 }, 00:26:43.972 { 00:26:43.972 "method": "bdev_wait_for_examine" 00:26:43.972 } 00:26:43.972 ] 00:26:43.972 } 00:26:43.972 ] 00:26:43.972 } 00:26:43.972 [2024-11-18 05:07:07.414724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:43.972 [2024-11-18 05:07:07.414894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89967 ] 00:26:44.230 [2024-11-18 05:07:07.581954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.230 [2024-11-18 05:07:07.736345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.613  [2024-11-18T05:07:09.706Z] Copying: 40/64 [MB] (40 MBps) [2024-11-18T05:07:10.643Z] Copying: 64/64 [MB] (average 40 MBps) 00:26:47.119 00:26:47.119 00:26:47.119 real 0m3.180s 00:26:47.119 user 0m2.778s 00:26:47.119 sys 0m0.299s 00:26:47.119 05:07:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:47.119 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:26:47.119 ************************************ 00:26:47.119 END TEST dd_copy_to_out_bdev 00:26:47.119 ************************************ 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:26:47.119 05:07:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:47.119 05:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:47.119 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:26:47.119 ************************************ 00:26:47.119 START TEST dd_offset_magic 00:26:47.119 ************************************ 00:26:47.119 05:07:10 -- common/autotest_common.sh@1114 -- # offset_magic 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:26:47.119 05:07:10 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:47.119 05:07:10 -- dd/common.sh@31 -- # xtrace_disable 00:26:47.119 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:26:47.119 { 00:26:47.119 "subsystems": [ 00:26:47.119 { 00:26:47.119 "subsystem": "bdev", 00:26:47.119 "config": [ 00:26:47.119 { 00:26:47.119 "params": { 00:26:47.119 "block_size": 4096, 00:26:47.119 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:47.119 "name": "aio1" 00:26:47.119 }, 00:26:47.119 "method": "bdev_aio_create" 00:26:47.119 }, 00:26:47.119 { 00:26:47.119 "params": { 00:26:47.119 "trtype": "pcie", 00:26:47.119 "traddr": "0000:00:06.0", 00:26:47.119 "name": "Nvme0" 00:26:47.119 }, 00:26:47.119 "method": "bdev_nvme_attach_controller" 00:26:47.119 }, 00:26:47.119 { 00:26:47.119 "method": "bdev_wait_for_examine" 00:26:47.119 } 00:26:47.119 ] 00:26:47.119 } 00:26:47.119 ] 00:26:47.119 } 00:26:47.379 [2024-11-18 05:07:10.652815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
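(dd_offset_magic, starting above, is a seek/skip accounting check. After the copy-out test the magic string sits at byte 0 of Nvme0n1; each pass copies 65 units into aio1 at --seek=N (N = 16, then 64), so the magic must reappear exactly N MiB into aio1 — which the read-back at --skip=N verifies. One pass as a sketch, names and numbers as in this log:)

    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    offset=16    # the second pass uses 64
    "$SPDK_DD" --ib=Nvme0n1 --ob=aio1 --count=65 --seek="$offset" --bs=1048576 --json <(gen_conf)
    "$SPDK_DD" --ib=aio1 --of="$dump1" --count=1 --skip="$offset" --bs=1048576 --json <(gen_conf)
    read -rn26 magic_check < "$dump1"    # first 26 bytes of the unit read back
    [[ $magic_check == 'This Is Our Magic, find it' ]]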
00:26:47.379 [2024-11-18 05:07:10.652981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90023 ] 00:26:47.379 [2024-11-18 05:07:10.823613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.638 [2024-11-18 05:07:10.976857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.208  [2024-11-18T05:07:12.670Z] Copying: 65/65 [MB] (average 154 MBps) 00:26:49.146 00:26:49.146 05:07:12 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:26:49.146 05:07:12 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:49.146 05:07:12 -- dd/common.sh@31 -- # xtrace_disable 00:26:49.146 05:07:12 -- common/autotest_common.sh@10 -- # set +x 00:26:49.146 { 00:26:49.146 "subsystems": [ 00:26:49.146 { 00:26:49.146 "subsystem": "bdev", 00:26:49.146 "config": [ 00:26:49.146 { 00:26:49.146 "params": { 00:26:49.146 "block_size": 4096, 00:26:49.146 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:49.146 "name": "aio1" 00:26:49.146 }, 00:26:49.146 "method": "bdev_aio_create" 00:26:49.146 }, 00:26:49.146 { 00:26:49.146 "params": { 00:26:49.146 "trtype": "pcie", 00:26:49.146 "traddr": "0000:00:06.0", 00:26:49.146 "name": "Nvme0" 00:26:49.146 }, 00:26:49.146 "method": "bdev_nvme_attach_controller" 00:26:49.146 }, 00:26:49.146 { 00:26:49.146 "method": "bdev_wait_for_examine" 00:26:49.146 } 00:26:49.146 ] 00:26:49.146 } 00:26:49.146 ] 00:26:49.146 } 00:26:49.405 [2024-11-18 05:07:12.692681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:49.405 [2024-11-18 05:07:12.693337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90051 ] 00:26:49.405 [2024-11-18 05:07:12.862849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.664 [2024-11-18 05:07:13.015518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.923  [2024-11-18T05:07:14.386Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:26:50.862 00:26:50.862 05:07:14 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:50.862 05:07:14 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:50.862 05:07:14 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:50.862 05:07:14 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:26:50.862 05:07:14 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:50.862 05:07:14 -- dd/common.sh@31 -- # xtrace_disable 00:26:50.862 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:26:50.862 { 00:26:50.862 "subsystems": [ 00:26:50.862 { 00:26:50.862 "subsystem": "bdev", 00:26:50.862 "config": [ 00:26:50.862 { 00:26:50.862 "params": { 00:26:50.862 "block_size": 4096, 00:26:50.862 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:50.862 "name": "aio1" 00:26:50.862 }, 00:26:50.862 "method": "bdev_aio_create" 00:26:50.862 }, 00:26:50.862 { 00:26:50.862 "params": { 00:26:50.862 "trtype": "pcie", 00:26:50.862 "traddr": "0000:00:06.0", 00:26:50.862 "name": "Nvme0" 00:26:50.862 }, 00:26:50.862 "method": "bdev_nvme_attach_controller" 00:26:50.862 }, 00:26:50.862 { 00:26:50.862 "method": "bdev_wait_for_examine" 00:26:50.862 } 00:26:50.862 ] 00:26:50.862 } 00:26:50.862 ] 00:26:50.862 } 00:26:50.862 [2024-11-18 05:07:14.300248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
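(A note on the [[ This Is Our Magic, find it == \T\h\i\s\ ... ]] records above: the backslash run is xtrace's rendering of a quoted right-hand side — bash escapes every character when printing the trace, so the comparison is a literal string match rather than a glob pattern. The statement being traced is the plain [[ $magic_check == "$magic" ]] from the sketch earlier.)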
00:26:50.862 [2024-11-18 05:07:14.300410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90082 ] 00:26:51.121 [2024-11-18 05:07:14.473662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.121 [2024-11-18 05:07:14.636993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.690  [2024-11-18T05:07:16.152Z] Copying: 65/65 [MB] (average 1083 MBps) 00:26:52.628 00:26:52.628 05:07:15 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:26:52.628 05:07:15 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:52.628 05:07:15 -- dd/common.sh@31 -- # xtrace_disable 00:26:52.628 05:07:15 -- common/autotest_common.sh@10 -- # set +x 00:26:52.628 { 00:26:52.628 "subsystems": [ 00:26:52.628 { 00:26:52.628 "subsystem": "bdev", 00:26:52.628 "config": [ 00:26:52.628 { 00:26:52.628 "params": { 00:26:52.628 "block_size": 4096, 00:26:52.628 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:52.628 "name": "aio1" 00:26:52.628 }, 00:26:52.628 "method": "bdev_aio_create" 00:26:52.628 }, 00:26:52.628 { 00:26:52.628 "params": { 00:26:52.628 "trtype": "pcie", 00:26:52.628 "traddr": "0000:00:06.0", 00:26:52.628 "name": "Nvme0" 00:26:52.628 }, 00:26:52.628 "method": "bdev_nvme_attach_controller" 00:26:52.628 }, 00:26:52.628 { 00:26:52.628 "method": "bdev_wait_for_examine" 00:26:52.628 } 00:26:52.628 ] 00:26:52.628 } 00:26:52.628 ] 00:26:52.628 } 00:26:52.628 [2024-11-18 05:07:16.008702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:52.628 [2024-11-18 05:07:16.008909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90103 ] 00:26:52.887 [2024-11-18 05:07:16.176016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.887 [2024-11-18 05:07:16.328105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.146  [2024-11-18T05:07:17.612Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:54.088 00:26:54.088 05:07:17 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:54.088 05:07:17 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:54.088 00:26:54.088 real 0m6.915s 00:26:54.088 user 0m5.123s 00:26:54.088 sys 0m0.989s 00:26:54.088 ************************************ 00:26:54.088 END TEST dd_offset_magic 00:26:54.088 ************************************ 00:26:54.088 05:07:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:54.088 05:07:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.088 05:07:17 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:26:54.088 05:07:17 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:26:54.088 05:07:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:54.088 05:07:17 -- dd/common.sh@11 -- # local nvme_ref= 00:26:54.088 05:07:17 -- dd/common.sh@12 -- # local size=4194330 00:26:54.088 05:07:17 -- dd/common.sh@14 -- # local bs=1048576 00:26:54.088 05:07:17 -- dd/common.sh@15 -- # local count=5 00:26:54.088 05:07:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:26:54.088 05:07:17 -- dd/common.sh@18 -- # gen_conf 00:26:54.088 05:07:17 -- dd/common.sh@31 -- # xtrace_disable 00:26:54.088 05:07:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.088 { 00:26:54.088 "subsystems": [ 00:26:54.088 { 00:26:54.088 "subsystem": "bdev", 00:26:54.088 "config": [ 00:26:54.088 { 00:26:54.088 "params": { 00:26:54.088 "block_size": 4096, 00:26:54.088 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:54.088 "name": "aio1" 00:26:54.088 }, 00:26:54.088 "method": "bdev_aio_create" 00:26:54.088 }, 00:26:54.088 { 00:26:54.088 "params": { 00:26:54.088 "trtype": "pcie", 00:26:54.088 "traddr": "0000:00:06.0", 00:26:54.088 "name": "Nvme0" 00:26:54.088 }, 00:26:54.088 "method": "bdev_nvme_attach_controller" 00:26:54.088 }, 00:26:54.088 { 00:26:54.088 "method": "bdev_wait_for_examine" 00:26:54.088 } 00:26:54.088 ] 00:26:54.088 } 00:26:54.088 ] 00:26:54.088 } 00:26:54.348 [2024-11-18 05:07:17.606203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
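(The cleanup above calls clear_nvme with a byte size of 4194330 and a 1 MiB unit size; the helper zeroes whole units, so the size gets rounded up to count=5, as the locals show. One way to express that rounding in bash — the exact expression inside dd/common.sh is not shown in this log:)

    size=4194330 bs=1048576
    count=$(( (size + bs - 1) / bs ))    # ceil(4194330 / 1048576) = 5
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json <(gen_conf)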
00:26:54.348 [2024-11-18 05:07:17.606364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90145 ] 00:26:54.348 [2024-11-18 05:07:17.776177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.607 [2024-11-18 05:07:17.931058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.866  [2024-11-18T05:07:19.328Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:26:55.804 00:26:55.804 05:07:19 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:26:55.804 05:07:19 -- dd/common.sh@10 -- # local bdev=aio1 00:26:55.804 05:07:19 -- dd/common.sh@11 -- # local nvme_ref= 00:26:55.804 05:07:19 -- dd/common.sh@12 -- # local size=4194330 00:26:55.804 05:07:19 -- dd/common.sh@14 -- # local bs=1048576 00:26:55.804 05:07:19 -- dd/common.sh@15 -- # local count=5 00:26:55.804 05:07:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:26:55.804 05:07:19 -- dd/common.sh@18 -- # gen_conf 00:26:55.804 05:07:19 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.804 05:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:55.804 { 00:26:55.804 "subsystems": [ 00:26:55.804 { 00:26:55.804 "subsystem": "bdev", 00:26:55.804 "config": [ 00:26:55.804 { 00:26:55.804 "params": { 00:26:55.804 "block_size": 4096, 00:26:55.804 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:55.804 "name": "aio1" 00:26:55.804 }, 00:26:55.804 "method": "bdev_aio_create" 00:26:55.804 }, 00:26:55.804 { 00:26:55.804 "params": { 00:26:55.804 "trtype": "pcie", 00:26:55.804 "traddr": "0000:00:06.0", 00:26:55.804 "name": "Nvme0" 00:26:55.804 }, 00:26:55.804 "method": "bdev_nvme_attach_controller" 00:26:55.804 }, 00:26:55.804 { 00:26:55.804 "method": "bdev_wait_for_examine" 00:26:55.804 } 00:26:55.804 ] 00:26:55.804 } 00:26:55.804 ] 00:26:55.804 } 00:26:55.804 [2024-11-18 05:07:19.232669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:55.804 [2024-11-18 05:07:19.232983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90176 ] 00:26:56.063 [2024-11-18 05:07:19.403030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.063 [2024-11-18 05:07:19.555641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.631  [2024-11-18T05:07:21.093Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:26:57.569 00:26:57.569 05:07:20 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:57.569 00:26:57.569 real 0m17.027s 00:26:57.569 user 0m13.114s 00:26:57.569 sys 0m2.545s 00:26:57.569 ************************************ 00:26:57.569 END TEST spdk_dd_bdev_to_bdev 00:26:57.569 ************************************ 00:26:57.569 05:07:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:57.569 05:07:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.569 05:07:20 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:26:57.569 05:07:20 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:57.569 05:07:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:57.569 05:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:57.569 05:07:20 -- common/autotest_common.sh@10 -- # set +x 00:26:57.569 ************************************ 00:26:57.569 START TEST spdk_dd_sparse 00:26:57.569 ************************************ 00:26:57.569 05:07:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:57.569 * Looking for test storage... 00:26:57.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:57.569 05:07:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:57.569 05:07:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:57.569 05:07:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:57.569 05:07:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:57.569 05:07:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:57.569 05:07:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:57.569 05:07:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:57.569 05:07:21 -- scripts/common.sh@335 -- # IFS=.-: 00:26:57.569 05:07:21 -- scripts/common.sh@335 -- # read -ra ver1 00:26:57.569 05:07:21 -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.569 05:07:21 -- scripts/common.sh@336 -- # read -ra ver2 00:26:57.569 05:07:21 -- scripts/common.sh@337 -- # local 'op=<' 00:26:57.569 05:07:21 -- scripts/common.sh@339 -- # ver1_l=2 00:26:57.569 05:07:21 -- scripts/common.sh@340 -- # ver2_l=1 00:26:57.569 05:07:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:57.569 05:07:21 -- scripts/common.sh@343 -- # case "$op" in 00:26:57.569 05:07:21 -- scripts/common.sh@344 -- # : 1 00:26:57.569 05:07:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:57.569 05:07:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.569 05:07:21 -- scripts/common.sh@364 -- # decimal 1 00:26:57.569 05:07:21 -- scripts/common.sh@352 -- # local d=1 00:26:57.569 05:07:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.569 05:07:21 -- scripts/common.sh@354 -- # echo 1 00:26:57.569 05:07:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:57.569 05:07:21 -- scripts/common.sh@365 -- # decimal 2 00:26:57.569 05:07:21 -- scripts/common.sh@352 -- # local d=2 00:26:57.569 05:07:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.569 05:07:21 -- scripts/common.sh@354 -- # echo 2 00:26:57.569 05:07:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:57.569 05:07:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:57.569 05:07:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:57.569 05:07:21 -- scripts/common.sh@367 -- # return 0 00:26:57.569 05:07:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.569 05:07:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:57.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.569 --rc genhtml_branch_coverage=1 00:26:57.569 --rc genhtml_function_coverage=1 00:26:57.569 --rc genhtml_legend=1 00:26:57.569 --rc geninfo_all_blocks=1 00:26:57.569 --rc geninfo_unexecuted_blocks=1 00:26:57.569 00:26:57.569 ' 00:26:57.569 05:07:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:57.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.569 --rc genhtml_branch_coverage=1 00:26:57.569 --rc genhtml_function_coverage=1 00:26:57.569 --rc genhtml_legend=1 00:26:57.569 --rc geninfo_all_blocks=1 00:26:57.569 --rc geninfo_unexecuted_blocks=1 00:26:57.569 00:26:57.569 ' 00:26:57.569 05:07:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:57.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.569 --rc genhtml_branch_coverage=1 00:26:57.569 --rc genhtml_function_coverage=1 00:26:57.569 --rc genhtml_legend=1 00:26:57.569 --rc geninfo_all_blocks=1 00:26:57.569 --rc geninfo_unexecuted_blocks=1 00:26:57.569 00:26:57.569 ' 00:26:57.569 05:07:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:57.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.569 --rc genhtml_branch_coverage=1 00:26:57.569 --rc genhtml_function_coverage=1 00:26:57.569 --rc genhtml_legend=1 00:26:57.569 --rc geninfo_all_blocks=1 00:26:57.569 --rc geninfo_unexecuted_blocks=1 00:26:57.569 00:26:57.569 ' 00:26:57.569 05:07:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:57.569 05:07:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.569 05:07:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.569 05:07:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.569 05:07:21 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.569 05:07:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.569 05:07:21 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.570 05:07:21 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.570 05:07:21 -- paths/export.sh@6 -- # export PATH 00:26:57.570 05:07:21 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.570 05:07:21 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:26:57.570 05:07:21 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:26:57.570 05:07:21 -- dd/sparse.sh@110 -- # file1=file_zero1 00:26:57.570 05:07:21 -- dd/sparse.sh@111 -- # file2=file_zero2 00:26:57.570 05:07:21 -- dd/sparse.sh@112 -- # file3=file_zero3 00:26:57.570 05:07:21 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:26:57.570 05:07:21 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:26:57.570 05:07:21 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:26:57.570 05:07:21 -- dd/sparse.sh@118 -- # prepare 00:26:57.570 05:07:21 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:26:57.570 05:07:21 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:26:57.570 1+0 records in 00:26:57.570 1+0 records out 00:26:57.570 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00810898 s, 517 MB/s 00:26:57.570 05:07:21 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:26:57.570 1+0 records in 00:26:57.570 1+0 records out 00:26:57.570 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00779527 s, 538 MB/s 00:26:57.570 05:07:21 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:26:57.570 1+0 records in 00:26:57.570 1+0 records out 00:26:57.570 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00823193 s, 510 MB/s 00:26:57.570 05:07:21 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:26:57.570 05:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:57.570 05:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:57.570 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.829 ************************************ 00:26:57.829 START TEST dd_sparse_file_to_file 00:26:57.829 ************************************ 00:26:57.829 05:07:21 -- common/autotest_common.sh@1114 -- # file_to_file 00:26:57.829 05:07:21 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:26:57.829 05:07:21 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:26:57.829 05:07:21 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:57.829 05:07:21 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:26:57.829 05:07:21 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:26:57.829 05:07:21 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:26:57.829 05:07:21 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:26:57.829 05:07:21 -- dd/sparse.sh@41 -- # gen_conf 00:26:57.829 05:07:21 -- dd/common.sh@31 -- # xtrace_disable 00:26:57.829 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:26:57.829 { 00:26:57.829 
"subsystems": [ 00:26:57.829 { 00:26:57.829 "subsystem": "bdev", 00:26:57.829 "config": [ 00:26:57.829 { 00:26:57.829 "params": { 00:26:57.829 "block_size": 4096, 00:26:57.829 "filename": "dd_sparse_aio_disk", 00:26:57.829 "name": "dd_aio" 00:26:57.829 }, 00:26:57.829 "method": "bdev_aio_create" 00:26:57.829 }, 00:26:57.829 { 00:26:57.829 "params": { 00:26:57.829 "lvs_name": "dd_lvstore", 00:26:57.829 "bdev_name": "dd_aio" 00:26:57.829 }, 00:26:57.829 "method": "bdev_lvol_create_lvstore" 00:26:57.829 }, 00:26:57.829 { 00:26:57.829 "method": "bdev_wait_for_examine" 00:26:57.829 } 00:26:57.829 ] 00:26:57.829 } 00:26:57.829 ] 00:26:57.829 } 00:26:57.829 [2024-11-18 05:07:21.145869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:57.829 [2024-11-18 05:07:21.146004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90253 ] 00:26:57.829 [2024-11-18 05:07:21.299590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.089 [2024-11-18 05:07:21.449090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.348  [2024-11-18T05:07:22.809Z] Copying: 12/36 [MB] (average 1333 MBps) 00:26:59.285 00:26:59.285 05:07:22 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:26:59.285 05:07:22 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:26:59.285 05:07:22 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:26:59.285 05:07:22 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:26:59.285 05:07:22 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:59.285 05:07:22 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:26:59.285 05:07:22 -- dd/sparse.sh@52 -- # stat1_b=24576 00:26:59.285 05:07:22 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:26:59.285 05:07:22 -- dd/sparse.sh@53 -- # stat2_b=24576 00:26:59.285 05:07:22 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:59.285 00:26:59.285 real 0m1.661s 00:26:59.285 user 0m1.316s 00:26:59.285 sys 0m0.223s 00:26:59.285 05:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:59.285 05:07:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.285 ************************************ 00:26:59.285 END TEST dd_sparse_file_to_file 00:26:59.285 ************************************ 00:26:59.285 05:07:22 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:26:59.285 05:07:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:59.285 05:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:59.285 05:07:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.544 ************************************ 00:26:59.544 START TEST dd_sparse_file_to_bdev 00:26:59.544 ************************************ 00:26:59.544 05:07:22 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:26:59.544 05:07:22 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:59.544 05:07:22 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:26:59.544 05:07:22 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:26:59.544 05:07:22 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:26:59.544 05:07:22 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 
--ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:26:59.544 05:07:22 -- dd/sparse.sh@73 -- # gen_conf 00:26:59.544 05:07:22 -- dd/common.sh@31 -- # xtrace_disable 00:26:59.544 05:07:22 -- common/autotest_common.sh@10 -- # set +x 00:26:59.544 { 00:26:59.544 "subsystems": [ 00:26:59.544 { 00:26:59.544 "subsystem": "bdev", 00:26:59.544 "config": [ 00:26:59.544 { 00:26:59.544 "params": { 00:26:59.544 "block_size": 4096, 00:26:59.544 "filename": "dd_sparse_aio_disk", 00:26:59.544 "name": "dd_aio" 00:26:59.544 }, 00:26:59.544 "method": "bdev_aio_create" 00:26:59.544 }, 00:26:59.544 { 00:26:59.544 "params": { 00:26:59.544 "lvs_name": "dd_lvstore", 00:26:59.544 "lvol_name": "dd_lvol", 00:26:59.544 "size": 37748736, 00:26:59.544 "thin_provision": true 00:26:59.544 }, 00:26:59.545 "method": "bdev_lvol_create" 00:26:59.545 }, 00:26:59.545 { 00:26:59.545 "method": "bdev_wait_for_examine" 00:26:59.545 } 00:26:59.545 ] 00:26:59.545 } 00:26:59.545 ] 00:26:59.545 } 00:26:59.545 [2024-11-18 05:07:22.870009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:59.545 [2024-11-18 05:07:22.870216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90309 ] 00:26:59.545 [2024-11-18 05:07:23.038043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.803 [2024-11-18 05:07:23.187943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.062 [2024-11-18 05:07:23.417029] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:27:00.062  [2024-11-18T05:07:23.586Z] Copying: 12/36 [MB] (average 521 MBps)[2024-11-18 05:07:23.469280] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:27:00.999 00:27:00.999 00:27:00.999 00:27:00.999 real 0m1.659s 00:27:00.999 user 0m1.333s 00:27:00.999 sys 0m0.216s 00:27:00.999 05:07:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.999 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:27:00.999 ************************************ 00:27:00.999 END TEST dd_sparse_file_to_bdev 00:27:00.999 ************************************ 00:27:00.999 05:07:24 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:00.999 05:07:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.999 05:07:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.999 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.258 ************************************ 00:27:01.258 START TEST dd_sparse_bdev_to_file 00:27:01.258 ************************************ 00:27:01.258 05:07:24 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:27:01.258 05:07:24 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:01.258 05:07:24 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:01.258 05:07:24 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:01.258 05:07:24 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:01.258 05:07:24 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 
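(The sparse tests assert two invariants after every copy. stat --printf=%s, the logical length, must stay 37748736: prepare wrote three 4 MiB chunks at 4 MiB-unit seeks 0, 4 and 8, so the data ends at 32 MiB + 4 MiB = 36 MiB. stat --printf=%b, allocated 512-byte blocks, must stay 24576, i.e. 24576 * 512 B = 12 MiB actually stored — only the three written chunks — proving --sparse preserved the holes. The checks, distilled:)

    [[ $(stat --printf=%s file_zero2) == 37748736 ]]   # logical size: 36 MiB
    [[ $(stat --printf=%b file_zero2) == 24576 ]]      # allocated: 12 MiB in 512 B blocks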
00:27:01.258 05:07:24 -- dd/sparse.sh@91 -- # gen_conf 00:27:01.258 05:07:24 -- dd/common.sh@31 -- # xtrace_disable 00:27:01.258 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.258 { 00:27:01.258 "subsystems": [ 00:27:01.258 { 00:27:01.258 "subsystem": "bdev", 00:27:01.258 "config": [ 00:27:01.258 { 00:27:01.258 "params": { 00:27:01.258 "block_size": 4096, 00:27:01.258 "filename": "dd_sparse_aio_disk", 00:27:01.258 "name": "dd_aio" 00:27:01.258 }, 00:27:01.258 "method": "bdev_aio_create" 00:27:01.258 }, 00:27:01.258 { 00:27:01.258 "method": "bdev_wait_for_examine" 00:27:01.258 } 00:27:01.258 ] 00:27:01.258 } 00:27:01.258 ] 00:27:01.258 } 00:27:01.258 [2024-11-18 05:07:24.569519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:01.258 [2024-11-18 05:07:24.569694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90354 ] 00:27:01.258 [2024-11-18 05:07:24.721284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.517 [2024-11-18 05:07:24.870949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.776  [2024-11-18T05:07:26.238Z] Copying: 12/36 [MB] (average 1333 MBps) 00:27:02.714 00:27:02.714 05:07:26 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:02.714 05:07:26 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:02.714 05:07:26 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:02.714 05:07:26 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:02.714 05:07:26 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:02.714 05:07:26 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:02.714 05:07:26 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:02.714 05:07:26 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:02.714 05:07:26 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:02.714 05:07:26 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:02.714 00:27:02.714 real 0m1.618s 00:27:02.714 user 0m1.297s 00:27:02.714 sys 0m0.212s 00:27:02.714 05:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:02.714 ************************************ 00:27:02.714 END TEST dd_sparse_bdev_to_file 00:27:02.714 ************************************ 00:27:02.714 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:02.714 05:07:26 -- dd/sparse.sh@1 -- # cleanup 00:27:02.714 05:07:26 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:02.714 05:07:26 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:02.714 05:07:26 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:02.714 05:07:26 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:02.714 00:27:02.714 real 0m5.357s 00:27:02.714 user 0m4.117s 00:27:02.714 sys 0m0.888s 00:27:02.714 05:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:02.714 ************************************ 00:27:02.714 END TEST spdk_dd_sparse 00:27:02.714 ************************************ 00:27:02.714 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:02.973 05:07:26 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:02.973 05:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.973 05:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.973 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:02.973 ************************************ 00:27:02.973 START TEST 
spdk_dd_negative 00:27:02.973 ************************************ 00:27:02.973 05:07:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:02.973 * Looking for test storage... 00:27:02.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:02.973 05:07:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:02.973 05:07:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:02.973 05:07:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:02.973 05:07:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:02.973 05:07:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:02.973 05:07:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:02.973 05:07:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:02.973 05:07:26 -- scripts/common.sh@335 -- # IFS=.-: 00:27:02.973 05:07:26 -- scripts/common.sh@335 -- # read -ra ver1 00:27:02.973 05:07:26 -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.973 05:07:26 -- scripts/common.sh@336 -- # read -ra ver2 00:27:02.973 05:07:26 -- scripts/common.sh@337 -- # local 'op=<' 00:27:02.973 05:07:26 -- scripts/common.sh@339 -- # ver1_l=2 00:27:02.973 05:07:26 -- scripts/common.sh@340 -- # ver2_l=1 00:27:02.973 05:07:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:02.973 05:07:26 -- scripts/common.sh@343 -- # case "$op" in 00:27:02.973 05:07:26 -- scripts/common.sh@344 -- # : 1 00:27:02.973 05:07:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:02.974 05:07:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.974 05:07:26 -- scripts/common.sh@364 -- # decimal 1 00:27:02.974 05:07:26 -- scripts/common.sh@352 -- # local d=1 00:27:02.974 05:07:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.974 05:07:26 -- scripts/common.sh@354 -- # echo 1 00:27:02.974 05:07:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:02.974 05:07:26 -- scripts/common.sh@365 -- # decimal 2 00:27:02.974 05:07:26 -- scripts/common.sh@352 -- # local d=2 00:27:02.974 05:07:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.974 05:07:26 -- scripts/common.sh@354 -- # echo 2 00:27:02.974 05:07:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:02.974 05:07:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:02.974 05:07:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:02.974 05:07:26 -- scripts/common.sh@367 -- # return 0 00:27:02.974 05:07:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.974 05:07:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:02.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.974 --rc genhtml_branch_coverage=1 00:27:02.974 --rc genhtml_function_coverage=1 00:27:02.974 --rc genhtml_legend=1 00:27:02.974 --rc geninfo_all_blocks=1 00:27:02.974 --rc geninfo_unexecuted_blocks=1 00:27:02.974 00:27:02.974 ' 00:27:02.974 05:07:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:02.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.974 --rc genhtml_branch_coverage=1 00:27:02.974 --rc genhtml_function_coverage=1 00:27:02.974 --rc genhtml_legend=1 00:27:02.974 --rc geninfo_all_blocks=1 00:27:02.974 --rc geninfo_unexecuted_blocks=1 00:27:02.974 00:27:02.974 ' 00:27:02.974 05:07:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:02.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.974 --rc 
genhtml_branch_coverage=1 00:27:02.974 --rc genhtml_function_coverage=1 00:27:02.974 --rc genhtml_legend=1 00:27:02.974 --rc geninfo_all_blocks=1 00:27:02.974 --rc geninfo_unexecuted_blocks=1 00:27:02.974 00:27:02.974 ' 00:27:02.974 05:07:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:02.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.974 --rc genhtml_branch_coverage=1 00:27:02.974 --rc genhtml_function_coverage=1 00:27:02.974 --rc genhtml_legend=1 00:27:02.974 --rc geninfo_all_blocks=1 00:27:02.974 --rc geninfo_unexecuted_blocks=1 00:27:02.974 00:27:02.974 ' 00:27:02.974 05:07:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:02.974 05:07:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.974 05:07:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.974 05:07:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.974 05:07:26 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.974 05:07:26 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.974 05:07:26 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.974 05:07:26 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.974 05:07:26 -- paths/export.sh@6 -- # export PATH 00:27:02.974 05:07:26 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.974 05:07:26 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:02.974 05:07:26 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:02.974 05:07:26 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:02.974 05:07:26 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:02.974 05:07:26 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:02.974 05:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.974 05:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.974 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:02.974 ************************************ 00:27:02.974 START TEST dd_invalid_arguments 00:27:02.974 ************************************ 00:27:02.974 05:07:26 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:27:02.974 05:07:26 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:02.974 05:07:26 -- common/autotest_common.sh@650 -- # local es=0 00:27:02.974 05:07:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:02.974 05:07:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.974 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.974 05:07:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.974 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.974 05:07:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.974 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.974 05:07:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
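(dd_invalid_arguments, starting above, opens the negative suite: each case hands spdk_dd a bad flag combination through the NOT helper from autotest_common.sh, which inverts the exit status so the test passes only when spdk_dd fails. A minimal stand-in covering just the behaviour visible in this log — the real helper also records the exit status seen in the es= lines:)

    NOT() {
        # Succeed iff the wrapped command fails.
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT "$SPDK_DD" --ii= --ob=    # --ii is not a valid option, so spdk_dd must error out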
00:27:02.974 05:07:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:02.974 05:07:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:03.234 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:03.234 options: 00:27:03.234 -c, --config JSON config file (default none) 00:27:03.234 --json JSON config file (default none) 00:27:03.234 --json-ignore-init-errors 00:27:03.234 don't exit on invalid config entry 00:27:03.234 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:03.234 -g, --single-file-segments 00:27:03.234 force creating just one hugetlbfs file 00:27:03.234 -h, --help show this usage 00:27:03.234 -i, --shm-id shared memory ID (optional) 00:27:03.234 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:03.234 --lcores lcore to CPU mapping list. The list is in the format: 00:27:03.234 [<,lcores[@CPUs]>...] 00:27:03.234 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:03.234 Within the group, '-' is used for range separator, 00:27:03.234 ',' is used for single number separator. 00:27:03.234 '( )' can be omitted for single element group, 00:27:03.234 '@' can be omitted if cpus and lcores have the same value 00:27:03.234 -n, --mem-channels channel number of memory channels used for DPDK 00:27:03.234 -p, --main-core main (primary) core for DPDK 00:27:03.234 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:03.234 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:03.234 --disable-cpumask-locks Disable CPU core lock files. 00:27:03.234 --silence-noticelog disable notice level logging to stderr 00:27:03.234 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:03.234 -u, --no-pci disable PCI access 00:27:03.234 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:03.234 --max-delay maximum reactor delay (in microseconds) 00:27:03.234 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:03.234 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:03.234 -R, --huge-unlink unlink huge files after initialization 00:27:03.234 -v, --version print SPDK version 00:27:03.234 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:03.234 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:03.234 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:03.234 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:03.234 Tracepoints vary in size and can use more than one trace entry. 
00:27:03.234 --rpcs-allowed comma-separated list of permitted RPCS 00:27:03.234 --env-context Opaque context for use of the env implementation 00:27:03.234 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:03.234 --no-huge run without using hugepages 00:27:03.234 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:03.234 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:27:03.234 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:03.234 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:27:03.234 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:27:03.234 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:03.234 [2024-11-18 05:07:26.503067] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:03.234 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:03.234 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:03.234 [--------- DD Options ---------] 00:27:03.234 --if Input file. Must specify either --if or --ib. 00:27:03.234 --ib Input bdev. Must specify either --if or --ib 00:27:03.234 --of Output file. Must specify either --of or --ob. 00:27:03.234 --ob Output bdev. Must specify either --of or --ob. 00:27:03.234 --iflag Input file flags. 00:27:03.234 --oflag Output file flags. 00:27:03.234 --bs I/O unit size (default: 4096) 00:27:03.234 --qd Queue depth (default: 2) 00:27:03.234 --count I/O unit count. The number of I/O units to copy. (default: all) 00:27:03.234 --skip Skip this many I/O units at start of input. (default: 0) 00:27:03.234 --seek Skip this many I/O units at start of output. (default: 0) 00:27:03.234 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:27:03.234 --sparse Enable hole skipping in input target 00:27:03.234 Available iflag and oflag values: 00:27:03.234 append - append mode 00:27:03.234 direct - use direct I/O for data 00:27:03.234 directory - fail unless a directory 00:27:03.234 dsync - use synchronized I/O for data 00:27:03.234 noatime - do not update access time 00:27:03.234 noctty - do not assign controlling terminal from file 00:27:03.234 nofollow - do not follow symlinks 00:27:03.234 nonblock - use non-blocking I/O 00:27:03.234 sync - use synchronized I/O for data and metadata 00:27:03.234 05:07:26 -- common/autotest_common.sh@653 -- # es=2 00:27:03.234 05:07:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.234 05:07:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.234 05:07:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.234 00:27:03.234 real 0m0.111s 00:27:03.234 user 0m0.062s 00:27:03.234 sys 0m0.049s 00:27:03.234 05:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.234 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.234 ************************************ 00:27:03.234 END TEST dd_invalid_arguments 00:27:03.235 ************************************ 00:27:03.235 05:07:26 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:03.235 05:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.235 05:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.235 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.235 ************************************ 00:27:03.235 START TEST dd_double_input 00:27:03.235 ************************************ 00:27:03.235 05:07:26 -- common/autotest_common.sh@1114 -- # double_input 00:27:03.235 05:07:26 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:03.235 05:07:26 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.235 05:07:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:03.235 05:07:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.235 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.235 05:07:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.235 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.235 05:07:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.235 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.235 05:07:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.235 05:07:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.235 05:07:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:03.235 [2024-11-18 05:07:26.659468] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
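(On the exit codes: the unrecognized --ii= option above ends with es=2, while the --if/--ib conflict below ends with es=22, which lines up with the EINVAL errno value. The (( es > 128 )) guard after each failure separates an ordinary error exit from death by signal, since the shell reports 128 + signal number in that case.)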
00:27:03.235 05:07:26 -- common/autotest_common.sh@653 -- # es=22 00:27:03.235 05:07:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.235 05:07:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.235 05:07:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.235 00:27:03.235 real 0m0.109s 00:27:03.235 user 0m0.059s 00:27:03.235 sys 0m0.050s 00:27:03.235 05:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.235 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.235 ************************************ 00:27:03.235 END TEST dd_double_input 00:27:03.235 ************************************ 00:27:03.235 05:07:26 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:03.235 05:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.235 05:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.235 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.494 ************************************ 00:27:03.494 START TEST dd_double_output 00:27:03.494 ************************************ 00:27:03.494 05:07:26 -- common/autotest_common.sh@1114 -- # double_output 00:27:03.494 05:07:26 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:03.494 05:07:26 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.494 05:07:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:03.494 05:07:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.494 05:07:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.494 05:07:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.494 05:07:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.494 05:07:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:03.494 [2024-11-18 05:07:26.820748] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:27:03.494 05:07:26 -- common/autotest_common.sh@653 -- # es=22 00:27:03.494 05:07:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.494 05:07:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.494 05:07:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.494 00:27:03.494 real 0m0.114s 00:27:03.494 user 0m0.067s 00:27:03.494 sys 0m0.047s 00:27:03.494 05:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.494 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.494 ************************************ 00:27:03.494 END TEST dd_double_output 00:27:03.494 ************************************ 00:27:03.494 05:07:26 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:03.494 05:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.494 05:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.494 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:27:03.494 ************************************ 00:27:03.494 START TEST dd_no_input 00:27:03.494 ************************************ 00:27:03.494 05:07:26 -- common/autotest_common.sh@1114 -- # no_input 00:27:03.494 05:07:26 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:03.494 05:07:26 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.494 05:07:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:03.494 05:07:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.494 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.495 05:07:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.495 05:07:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.495 05:07:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.495 05:07:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.495 05:07:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:03.495 [2024-11-18 05:07:26.988048] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:03.754 05:07:27 -- common/autotest_common.sh@653 -- # es=22 00:27:03.754 05:07:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.754 05:07:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.754 05:07:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.754 00:27:03.754 real 0m0.115s 00:27:03.754 user 0m0.068s 00:27:03.754 sys 0m0.047s 00:27:03.754 05:07:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.754 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:27:03.754 ************************************ 00:27:03.754 END TEST dd_no_input 00:27:03.754 ************************************ 00:27:03.754 05:07:27 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:03.754 05:07:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.754 05:07:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.754 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:27:03.754 ************************************ 
00:27:03.754 START TEST dd_no_output 00:27:03.754 ************************************ 00:27:03.754 05:07:27 -- common/autotest_common.sh@1114 -- # no_output 00:27:03.754 05:07:27 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:03.754 05:07:27 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.754 05:07:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:03.754 05:07:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.754 05:07:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.754 05:07:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.754 05:07:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:03.754 [2024-11-18 05:07:27.159033] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:03.754 05:07:27 -- common/autotest_common.sh@653 -- # es=22 00:27:03.754 05:07:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.754 05:07:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.754 05:07:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.754 00:27:03.754 real 0m0.118s 00:27:03.754 user 0m0.065s 00:27:03.754 sys 0m0.054s 00:27:03.754 05:07:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.754 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:27:03.754 ************************************ 00:27:03.754 END TEST dd_no_output 00:27:03.754 ************************************ 00:27:03.754 05:07:27 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:03.754 05:07:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.754 05:07:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.754 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:27:03.754 ************************************ 00:27:03.754 START TEST dd_wrong_blocksize 00:27:03.754 ************************************ 00:27:03.754 05:07:27 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:27:03.754 05:07:27 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:03.754 05:07:27 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.754 05:07:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:03.754 05:07:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.754 05:07:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.754 05:07:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.754 05:07:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.754 05:07:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:04.012 [2024-11-18 05:07:27.332759] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:04.012 05:07:27 -- common/autotest_common.sh@653 -- # es=22 00:27:04.012 05:07:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:04.012 05:07:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:04.012 05:07:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:04.012 00:27:04.012 real 0m0.120s 00:27:04.012 user 0m0.062s 00:27:04.012 sys 0m0.058s 00:27:04.012 05:07:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:04.012 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:27:04.012 ************************************ 00:27:04.012 END TEST dd_wrong_blocksize 00:27:04.012 ************************************ 00:27:04.012 05:07:27 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:04.012 05:07:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:04.012 05:07:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:04.012 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:27:04.012 ************************************ 00:27:04.012 START TEST dd_smaller_blocksize 00:27:04.012 ************************************ 00:27:04.012 05:07:27 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:27:04.012 05:07:27 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:04.012 05:07:27 -- common/autotest_common.sh@650 -- # local es=0 00:27:04.012 05:07:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:04.012 05:07:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.012 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:04.012 05:07:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.012 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:04.012 05:07:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.012 05:07:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:04.013 05:07:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.013 05:07:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:27:04.013 05:07:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:04.013 [2024-11-18 05:07:27.512319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:04.013 [2024-11-18 05:07:27.512488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90596 ] 00:27:04.271 [2024-11-18 05:07:27.686817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.529 [2024-11-18 05:07:27.923136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.096 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:05.097 [2024-11-18 05:07:28.408669] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:05.097 [2024-11-18 05:07:28.408721] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:05.664 [2024-11-18 05:07:28.960919] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:05.922 05:07:29 -- common/autotest_common.sh@653 -- # es=244 00:27:05.922 05:07:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:05.922 05:07:29 -- common/autotest_common.sh@662 -- # es=116 00:27:05.922 05:07:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:05.922 05:07:29 -- common/autotest_common.sh@670 -- # es=1 00:27:05.922 05:07:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:05.922 00:27:05.922 real 0m1.873s 00:27:05.922 user 0m1.374s 00:27:05.922 sys 0m0.398s 00:27:05.922 05:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.922 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:05.922 ************************************ 00:27:05.922 END TEST dd_smaller_blocksize 00:27:05.922 ************************************ 00:27:05.922 05:07:29 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:05.922 05:07:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:05.922 05:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:05.922 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:05.922 ************************************ 00:27:05.922 START TEST dd_invalid_count 00:27:05.922 ************************************ 00:27:05.922 05:07:29 -- common/autotest_common.sh@1114 -- # invalid_count 00:27:05.922 05:07:29 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:05.922 05:07:29 -- common/autotest_common.sh@650 -- # local es=0 00:27:05.922 05:07:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:05.922 05:07:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.922 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:05.922 05:07:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.922 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:05.922 05:07:29 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.922 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:05.922 05:07:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.922 05:07:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:05.922 05:07:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:05.922 [2024-11-18 05:07:29.428366] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:06.181 05:07:29 -- common/autotest_common.sh@653 -- # es=22 00:27:06.181 05:07:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.181 05:07:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:06.181 05:07:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.181 00:27:06.181 real 0m0.118s 00:27:06.181 user 0m0.065s 00:27:06.181 sys 0m0.054s 00:27:06.181 05:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.181 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:06.181 ************************************ 00:27:06.181 END TEST dd_invalid_count 00:27:06.181 ************************************ 00:27:06.181 05:07:29 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:06.181 05:07:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.181 05:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.181 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:06.181 ************************************ 00:27:06.181 START TEST dd_invalid_oflag 00:27:06.181 ************************************ 00:27:06.181 05:07:29 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:27:06.181 05:07:29 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:06.181 05:07:29 -- common/autotest_common.sh@650 -- # local es=0 00:27:06.181 05:07:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:06.181 05:07:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.181 05:07:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.181 05:07:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:06.181 05:07:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:06.181 [2024-11-18 05:07:29.592031] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:06.181 05:07:29 -- common/autotest_common.sh@653 -- # es=22 00:27:06.181 05:07:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.181 05:07:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:06.181 
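[Editor's note] Together with the iflag run that follows, these cases cover spdk_dd's value validation: --bs must be a positive number small enough to allocate, --count must be non-negative, and --oflag/--iflag are only meaningful with a file output/input. The same checks as plain invocations (placeholder paths):

    build/bin/spdk_dd --if=/tmp/in --of=/tmp/out --bs=0              # Invalid --bs value
    build/bin/spdk_dd --if=/tmp/in --of=/tmp/out --bs=99999999999999 # Cannot allocate memory - try smaller block size value
    build/bin/spdk_dd --if=/tmp/in --of=/tmp/out --count=-9          # Invalid --count value
    build/bin/spdk_dd --ib=Malloc0 --ob=Malloc1 --oflag=0            # --oflags may be used only with --of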
05:07:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.181 00:27:06.181 real 0m0.094s 00:27:06.181 user 0m0.055s 00:27:06.181 sys 0m0.040s 00:27:06.181 05:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.181 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:06.181 ************************************ 00:27:06.181 END TEST dd_invalid_oflag 00:27:06.181 ************************************ 00:27:06.181 05:07:29 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:06.181 05:07:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.181 05:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.181 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:06.181 ************************************ 00:27:06.181 START TEST dd_invalid_iflag 00:27:06.181 ************************************ 00:27:06.181 05:07:29 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:27:06.181 05:07:29 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:06.181 05:07:29 -- common/autotest_common.sh@650 -- # local es=0 00:27:06.181 05:07:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:06.181 05:07:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.181 05:07:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.181 05:07:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.181 05:07:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:06.181 05:07:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:06.442 [2024-11-18 05:07:29.736052] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:06.442 05:07:29 -- common/autotest_common.sh@653 -- # es=22 00:27:06.442 05:07:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.442 05:07:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:06.442 05:07:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.442 00:27:06.442 real 0m0.095s 00:27:06.442 user 0m0.052s 00:27:06.442 sys 0m0.043s 00:27:06.442 05:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.442 ************************************ 00:27:06.442 END TEST dd_invalid_iflag 00:27:06.442 ************************************ 00:27:06.442 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:06.442 05:07:29 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:06.442 05:07:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.442 05:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.442 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:27:06.442 ************************************ 00:27:06.442 START TEST dd_unknown_flag 00:27:06.442 ************************************ 00:27:06.442 05:07:29 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:27:06.442 05:07:29 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:06.442 05:07:29 -- common/autotest_common.sh@650 -- # local es=0 00:27:06.442 05:07:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:06.442 05:07:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.442 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.442 05:07:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.442 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.442 05:07:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.442 05:07:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.442 05:07:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.442 05:07:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:06.442 05:07:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:06.442 [2024-11-18 05:07:29.886310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:06.442 [2024-11-18 05:07:29.886477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90703 ] 00:27:06.709 [2024-11-18 05:07:30.039657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.709 [2024-11-18 05:07:30.189791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.977 [2024-11-18 05:07:30.423054] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:06.977 [2024-11-18 05:07:30.423145] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:06.977 [2024-11-18 05:07:30.423162] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:06.977 [2024-11-18 05:07:30.423179] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:07.551 [2024-11-18 05:07:30.978041] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:07.810 05:07:31 -- common/autotest_common.sh@653 -- # es=236 00:27:07.810 05:07:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.810 05:07:31 -- common/autotest_common.sh@662 -- # es=108 00:27:07.810 05:07:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:07.810 05:07:31 -- common/autotest_common.sh@670 -- # es=1 00:27:07.810 05:07:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:07.810 00:27:07.810 real 0m1.489s 00:27:07.810 user 0m1.200s 00:27:07.810 sys 0m0.187s 00:27:07.810 05:07:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:07.810 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:27:07.810 ************************************ 00:27:07.810 END 
TEST dd_unknown_flag 00:27:07.810 ************************************ 00:27:08.069 05:07:31 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:08.069 05:07:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:08.069 05:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.069 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:27:08.069 ************************************ 00:27:08.069 START TEST dd_invalid_json 00:27:08.069 ************************************ 00:27:08.069 05:07:31 -- common/autotest_common.sh@1114 -- # invalid_json 00:27:08.069 05:07:31 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:08.069 05:07:31 -- common/autotest_common.sh@650 -- # local es=0 00:27:08.069 05:07:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:08.069 05:07:31 -- dd/negative_dd.sh@95 -- # : 00:27:08.069 05:07:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.069 05:07:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:08.069 05:07:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.069 05:07:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:08.069 05:07:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.069 05:07:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:08.069 05:07:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.069 05:07:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:08.069 05:07:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:08.069 [2024-11-18 05:07:31.436404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
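[Editor's note] The large es values the wrapper sees decode mechanically once you assume spdk_dd exits with a negative errno: the shell reports the low byte, 256+rc, and the harness folds anything above 128 back down before the case statement maps it to 1. Worked out for the three such runs in this section (errno numbers are the standard Linux ones):

    echo $(( 256 - 12 ))   # 244: -ENOMEM from dd_smaller_blocksize, folded to 116
    echo $(( 256 - 20 ))   # 236: -ENOTDIR from dd_unknown_flag ('Not a directory'), folded to 108
    echo $(( 256 - 22 ))   # 234: -EINVAL from the dd_invalid_json run below, folded to 106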
00:27:08.069 [2024-11-18 05:07:31.436550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90737 ] 00:27:08.329 [2024-11-18 05:07:31.602827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.329 [2024-11-18 05:07:31.756619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.329 [2024-11-18 05:07:31.756866] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:08.329 [2024-11-18 05:07:31.756899] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:08.329 [2024-11-18 05:07:31.756959] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:08.588 05:07:32 -- common/autotest_common.sh@653 -- # es=234 00:27:08.588 05:07:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:08.588 05:07:32 -- common/autotest_common.sh@662 -- # es=106 00:27:08.588 05:07:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:08.588 05:07:32 -- common/autotest_common.sh@670 -- # es=1 00:27:08.588 ************************************ 00:27:08.588 END TEST dd_invalid_json 00:27:08.588 ************************************ 00:27:08.588 05:07:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:08.588 00:27:08.588 real 0m0.725s 00:27:08.588 user 0m0.509s 00:27:08.588 sys 0m0.117s 00:27:08.588 05:07:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:08.588 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:27:08.847 ************************************ 00:27:08.847 END TEST spdk_dd_negative 00:27:08.847 ************************************ 00:27:08.847 00:27:08.847 real 0m5.882s 00:27:08.847 user 0m3.940s 00:27:08.847 sys 0m1.607s 00:27:08.847 05:07:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:08.847 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:27:08.847 ************************************ 00:27:08.847 END TEST spdk_dd 00:27:08.847 ************************************ 00:27:08.847 00:27:08.847 real 2m10.483s 00:27:08.847 user 1m41.999s 00:27:08.847 sys 0m18.485s 00:27:08.847 05:07:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:08.847 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:27:08.847 05:07:32 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:27:08.847 05:07:32 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:08.847 05:07:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:08.847 05:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.847 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:27:08.847 ************************************ 00:27:08.847 START TEST blockdev_nvme 00:27:08.847 ************************************ 00:27:08.848 05:07:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:08.848 * Looking for test storage... 
00:27:08.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:08.848 05:07:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:08.848 05:07:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:08.848 05:07:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:09.108 05:07:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:09.108 05:07:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:09.108 05:07:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:09.108 05:07:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:09.108 05:07:32 -- scripts/common.sh@335 -- # IFS=.-: 00:27:09.108 05:07:32 -- scripts/common.sh@335 -- # read -ra ver1 00:27:09.108 05:07:32 -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.108 05:07:32 -- scripts/common.sh@336 -- # read -ra ver2 00:27:09.108 05:07:32 -- scripts/common.sh@337 -- # local 'op=<' 00:27:09.108 05:07:32 -- scripts/common.sh@339 -- # ver1_l=2 00:27:09.108 05:07:32 -- scripts/common.sh@340 -- # ver2_l=1 00:27:09.108 05:07:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:09.108 05:07:32 -- scripts/common.sh@343 -- # case "$op" in 00:27:09.108 05:07:32 -- scripts/common.sh@344 -- # : 1 00:27:09.108 05:07:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:09.108 05:07:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:09.108 05:07:32 -- scripts/common.sh@364 -- # decimal 1 00:27:09.108 05:07:32 -- scripts/common.sh@352 -- # local d=1 00:27:09.108 05:07:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.108 05:07:32 -- scripts/common.sh@354 -- # echo 1 00:27:09.108 05:07:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:09.108 05:07:32 -- scripts/common.sh@365 -- # decimal 2 00:27:09.108 05:07:32 -- scripts/common.sh@352 -- # local d=2 00:27:09.108 05:07:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.108 05:07:32 -- scripts/common.sh@354 -- # echo 2 00:27:09.108 05:07:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:09.108 05:07:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:09.108 05:07:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:09.108 05:07:32 -- scripts/common.sh@367 -- # return 0 00:27:09.108 05:07:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.108 05:07:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:09.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.108 --rc genhtml_branch_coverage=1 00:27:09.108 --rc genhtml_function_coverage=1 00:27:09.108 --rc genhtml_legend=1 00:27:09.108 --rc geninfo_all_blocks=1 00:27:09.108 --rc geninfo_unexecuted_blocks=1 00:27:09.108 00:27:09.108 ' 00:27:09.108 05:07:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:09.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.108 --rc genhtml_branch_coverage=1 00:27:09.108 --rc genhtml_function_coverage=1 00:27:09.108 --rc genhtml_legend=1 00:27:09.108 --rc geninfo_all_blocks=1 00:27:09.108 --rc geninfo_unexecuted_blocks=1 00:27:09.108 00:27:09.108 ' 00:27:09.108 05:07:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:09.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.108 --rc genhtml_branch_coverage=1 00:27:09.108 --rc genhtml_function_coverage=1 00:27:09.108 --rc genhtml_legend=1 00:27:09.108 --rc geninfo_all_blocks=1 00:27:09.108 --rc geninfo_unexecuted_blocks=1 00:27:09.108 00:27:09.108 ' 00:27:09.108 05:07:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:09.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.108 --rc genhtml_branch_coverage=1 00:27:09.108 --rc genhtml_function_coverage=1 00:27:09.108 --rc genhtml_legend=1 00:27:09.108 --rc geninfo_all_blocks=1 00:27:09.108 --rc geninfo_unexecuted_blocks=1 00:27:09.108 00:27:09.108 ' 00:27:09.108 05:07:32 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:09.108 05:07:32 -- bdev/nbd_common.sh@6 -- # set -e 00:27:09.108 05:07:32 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:09.108 05:07:32 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:09.108 05:07:32 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:09.108 05:07:32 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:09.108 05:07:32 -- bdev/blockdev.sh@18 -- # : 00:27:09.108 05:07:32 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:09.108 05:07:32 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:09.108 05:07:32 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:09.108 05:07:32 -- bdev/blockdev.sh@672 -- # uname -s 00:27:09.108 05:07:32 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:09.108 05:07:32 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:09.108 05:07:32 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:09.108 05:07:32 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:09.108 05:07:32 -- bdev/blockdev.sh@682 -- # dek= 00:27:09.108 05:07:32 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:09.108 05:07:32 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:09.108 05:07:32 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:09.108 05:07:32 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:09.108 05:07:32 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:09.108 05:07:32 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:09.108 05:07:32 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=90832 00:27:09.108 05:07:32 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:09.108 05:07:32 -- bdev/blockdev.sh@47 -- # waitforlisten 90832 00:27:09.108 05:07:32 -- common/autotest_common.sh@829 -- # '[' -z 90832 ']' 00:27:09.108 05:07:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.108 05:07:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:09.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.108 05:07:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.108 05:07:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:09.108 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.108 05:07:32 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:09.108 [2024-11-18 05:07:32.481087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
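[Editor's note] waitforlisten, whose @829-@862 frames bracket the spdk_tgt startup here, blocks until the target answers on its RPC socket. A sketch with the polling body assumed; the /var/tmp/spdk.sock default and max_retries=100 come straight from the logged locals:

    waitforlisten() {
        [[ -n $1 ]] || return 1                       # @829: a pid is required
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            [[ -S $rpc_addr ]] && return 0            # socket exists: target is up
            sleep 0.1
        done
        return 1
    }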
00:27:09.108 [2024-11-18 05:07:32.481269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90832 ] 00:27:09.368 [2024-11-18 05:07:32.648271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.368 [2024-11-18 05:07:32.800617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.368 [2024-11-18 05:07:32.800805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.936 05:07:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.936 05:07:33 -- common/autotest_common.sh@862 -- # return 0 00:27:09.936 05:07:33 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:09.936 05:07:33 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:09.936 05:07:33 -- bdev/blockdev.sh@79 -- # local json 00:27:09.936 05:07:33 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:09.936 05:07:33 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:10.196 05:07:33 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:10.196 05:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.196 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.196 05:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.196 05:07:33 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:10.196 05:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.196 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.196 05:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.196 05:07:33 -- bdev/blockdev.sh@738 -- # cat 00:27:10.196 05:07:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:10.196 05:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.196 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.196 05:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.196 05:07:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:10.196 05:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.196 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.196 05:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.196 05:07:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:10.196 05:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.196 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.196 05:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.196 05:07:33 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:10.196 05:07:33 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:10.196 05:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.196 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.196 05:07:33 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:10.196 05:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.196 05:07:33 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:10.196 05:07:33 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "015c496b-6d3d-494a-ac96-6fe2b2e7cda5"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "015c496b-6d3d-494a-ac96-6fe2b2e7cda5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:10.196 05:07:33 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:10.196 05:07:33 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:10.196 05:07:33 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:10.196 05:07:33 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:10.196 05:07:33 -- bdev/blockdev.sh@752 -- # killprocess 90832 00:27:10.196 05:07:33 -- common/autotest_common.sh@936 -- # '[' -z 90832 ']' 00:27:10.196 05:07:33 -- common/autotest_common.sh@940 -- # kill -0 90832 00:27:10.196 05:07:33 -- common/autotest_common.sh@941 -- # uname 00:27:10.196 05:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:10.196 05:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90832 00:27:10.196 05:07:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:10.196 05:07:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:10.196 killing process with pid 90832 00:27:10.196 05:07:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90832' 00:27:10.196 05:07:33 -- common/autotest_common.sh@955 -- # kill 90832 00:27:10.196 05:07:33 -- common/autotest_common.sh@960 -- # wait 90832 00:27:12.102 05:07:35 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:12.102 05:07:35 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:12.102 05:07:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:12.102 05:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.102 05:07:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.102 ************************************ 00:27:12.102 START TEST bdev_hello_world 00:27:12.102 ************************************ 00:27:12.102 05:07:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:12.102 [2024-11-18 05:07:35.428826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:12.102 [2024-11-18 05:07:35.429011] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90905 ] 00:27:12.102 [2024-11-18 05:07:35.597980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.362 [2024-11-18 05:07:35.746909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.621 [2024-11-18 05:07:36.085375] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:12.621 [2024-11-18 05:07:36.085458] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:27:12.621 [2024-11-18 05:07:36.085482] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:12.621 [2024-11-18 05:07:36.088003] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:12.621 [2024-11-18 05:07:36.088550] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:12.621 [2024-11-18 05:07:36.088606] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:12.621 [2024-11-18 05:07:36.088884] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:12.621 00:27:12.621 [2024-11-18 05:07:36.088925] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:13.559 00:27:13.559 real 0m1.625s 00:27:13.559 user 0m1.345s 00:27:13.559 sys 0m0.180s 00:27:13.559 ************************************ 00:27:13.559 05:07:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:13.559 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:27:13.559 END TEST bdev_hello_world 00:27:13.559 ************************************ 00:27:13.559 05:07:37 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:13.559 05:07:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:13.559 05:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.559 05:07:37 -- common/autotest_common.sh@10 -- # set +x 00:27:13.559 ************************************ 00:27:13.559 START TEST bdev_bounds 00:27:13.559 ************************************ 00:27:13.559 05:07:37 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:27:13.559 05:07:37 -- bdev/blockdev.sh@288 -- # bdevio_pid=90943 00:27:13.559 05:07:37 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:13.559 05:07:37 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 90943' 00:27:13.559 Process bdevio pid: 90943 00:27:13.559 05:07:37 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:13.559 05:07:37 -- bdev/blockdev.sh@291 -- # waitforlisten 90943 00:27:13.559 05:07:37 -- common/autotest_common.sh@829 -- # '[' -z 90943 ']' 00:27:13.559 05:07:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.559 05:07:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.559 05:07:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
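[Editor's note] Each test in this log is framed by run_test, which prints the asterisk banners, times the payload, and is the source of the real/user/sys triplets. A sketch consistent with the @1087/@1093 frames; banner width and the xtrace toggling are approximated:

    run_test() {
        local name=$1 rc; shift
        (( $# >= 1 )) || return 1   # arity guard, cf. autotest_common.sh@1087
        printf '%s\n' '************************************' "START TEST $name" \
                      '************************************'
        time "$@"; rc=$?            # produces the 'real 0m1.625s' style lines
        printf '%s\n' '************************************' "END TEST $name" \
                      '************************************'
        return $rc
    }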
00:27:13.559 05:07:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.560 05:07:37 -- common/autotest_common.sh@10 -- # set +x 00:27:13.819 [2024-11-18 05:07:37.106969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:13.819 [2024-11-18 05:07:37.107607] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90943 ] 00:27:13.819 [2024-11-18 05:07:37.277411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:14.078 [2024-11-18 05:07:37.433663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.078 [2024-11-18 05:07:37.433787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.078 [2024-11-18 05:07:37.433791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.645 05:07:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.645 05:07:38 -- common/autotest_common.sh@862 -- # return 0 00:27:14.645 05:07:38 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:14.645 I/O targets: 00:27:14.645 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:14.645 00:27:14.645 00:27:14.645 CUnit - A unit testing framework for C - Version 2.1-3 00:27:14.645 http://cunit.sourceforge.net/ 00:27:14.645 00:27:14.645 00:27:14.645 Suite: bdevio tests on: Nvme0n1 00:27:14.645 Test: blockdev write read block ...passed 00:27:14.645 Test: blockdev write zeroes read block ...passed 00:27:14.904 Test: blockdev write zeroes read no split ...passed 00:27:14.904 Test: blockdev write zeroes read split ...passed 00:27:14.904 Test: blockdev write zeroes read split partial ...passed 00:27:14.904 Test: blockdev reset ...[2024-11-18 05:07:38.214927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:14.904 [2024-11-18 05:07:38.218144] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:14.904 passed 00:27:14.904 Test: blockdev write read 8 blocks ...passed 00:27:14.904 Test: blockdev write read size > 128k ...passed 00:27:14.904 Test: blockdev write read invalid size ...passed 00:27:14.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:14.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:14.904 Test: blockdev write read max offset ...passed 00:27:14.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:14.904 Test: blockdev writev readv 8 blocks ...passed 00:27:14.904 Test: blockdev writev readv 30 x 1block ...passed 00:27:14.905 Test: blockdev writev readv block ...passed 00:27:14.905 Test: blockdev writev readv size > 128k ...passed 00:27:14.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:14.905 Test: blockdev comparev and writev ...[2024-11-18 05:07:38.227913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28d40d000 len:0x1000 00:27:14.905 [2024-11-18 05:07:38.227988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:14.905 passed 00:27:14.905 Test: blockdev nvme passthru rw ...passed 00:27:14.905 Test: blockdev nvme passthru vendor specific ...[2024-11-18 05:07:38.229152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:27:14.905 [2024-11-18 05:07:38.229485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:27:14.905 passed 00:27:14.905 Test: blockdev nvme admin passthru ...passed 00:27:14.905 Test: blockdev copy ...passed 00:27:14.905 00:27:14.905 Run Summary: Type Total Ran Passed Failed Inactive 00:27:14.905 suites 1 1 n/a 0 0 00:27:14.905 tests 23 23 23 0 0 00:27:14.905 asserts 152 152 152 0 n/a 00:27:14.905 00:27:14.905 Elapsed time = 0.195 seconds 00:27:14.905 0 00:27:14.905 05:07:38 -- bdev/blockdev.sh@293 -- # killprocess 90943 00:27:14.905 05:07:38 -- common/autotest_common.sh@936 -- # '[' -z 90943 ']' 00:27:14.905 05:07:38 -- common/autotest_common.sh@940 -- # kill -0 90943 00:27:14.905 05:07:38 -- common/autotest_common.sh@941 -- # uname 00:27:14.905 05:07:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:14.905 05:07:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90943 00:27:14.905 05:07:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:14.905 killing process with pid 90943 00:27:14.905 05:07:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:14.905 05:07:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90943' 00:27:14.905 05:07:38 -- common/autotest_common.sh@955 -- # kill 90943 00:27:14.905 05:07:38 -- common/autotest_common.sh@960 -- # wait 90943 00:27:15.842 05:07:39 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:15.842 00:27:15.842 real 0m2.108s 00:27:15.842 user 0m5.042s 00:27:15.842 sys 0m0.308s 00:27:15.842 05:07:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:15.842 ************************************ 00:27:15.842 END TEST bdev_bounds 00:27:15.842 ************************************ 00:27:15.842 05:07:39 -- common/autotest_common.sh@10 -- # set +x 00:27:15.842 05:07:39 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:15.842 
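[Editor's note] The bdevio suite above is a two-process affair: the bdevio binary starts with -w (wait for RPC) and the CUnit run is kicked off out-of-band by tests.py. Reduced to standalone commands, assuming both sides default to the same RPC socket:

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &  # -w: load bdevs, then wait for RPC
    bdevio_pid=$!
    test/bdev/bdevio/tests.py perform_tests                       # triggers the 23-test suite
    wait "$bdevio_pid"                                            # exits with the suite result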
05:07:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:15.842 05:07:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:15.842 05:07:39 -- common/autotest_common.sh@10 -- # set +x 00:27:15.842 ************************************ 00:27:15.842 START TEST bdev_nbd 00:27:15.842 ************************************ 00:27:15.842 05:07:39 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:15.842 05:07:39 -- bdev/blockdev.sh@298 -- # uname -s 00:27:15.842 05:07:39 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:15.842 05:07:39 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.842 05:07:39 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:15.842 05:07:39 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:27:15.842 05:07:39 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:15.842 05:07:39 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:27:15.842 05:07:39 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:15.842 05:07:39 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:15.842 05:07:39 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:15.842 05:07:39 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:27:15.842 05:07:39 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:27:15.842 05:07:39 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:15.842 05:07:39 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:27:15.842 05:07:39 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:15.842 05:07:39 -- bdev/blockdev.sh@316 -- # nbd_pid=90996 00:27:15.842 05:07:39 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:15.842 05:07:39 -- bdev/blockdev.sh@318 -- # waitforlisten 90996 /var/tmp/spdk-nbd.sock 00:27:15.842 05:07:39 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:15.842 05:07:39 -- common/autotest_common.sh@829 -- # '[' -z 90996 ']' 00:27:15.842 05:07:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:15.842 05:07:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:15.842 05:07:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:15.842 05:07:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.842 05:07:39 -- common/autotest_common.sh@10 -- # set +x 00:27:15.842 [2024-11-18 05:07:39.254847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
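[Editor's note] nbd_function_test, now starting, exports the bdev through the kernel NBD driver and smoke-tests it with dd; bdev_svc listens on /var/tmp/spdk-nbd.sock. The round trip below, condensed from the rpc.py and dd calls that follow:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0                          # export the bdev as /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # the same 4 KiB direct read
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_get_disks                                             # prints '[]' once nothing is exported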
00:27:15.842 [2024-11-18 05:07:39.254989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.102 [2024-11-18 05:07:39.401697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.102 [2024-11-18 05:07:39.555448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.670 05:07:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.670 05:07:40 -- common/autotest_common.sh@862 -- # return 0 00:27:16.670 05:07:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@24 -- # local i 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:16.670 05:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:27:16.929 05:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:16.929 05:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:16.929 05:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:16.929 05:07:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:16.929 05:07:40 -- common/autotest_common.sh@867 -- # local i 00:27:16.929 05:07:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:16.929 05:07:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:16.929 05:07:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:16.929 05:07:40 -- common/autotest_common.sh@871 -- # break 00:27:16.929 05:07:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:16.929 05:07:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:16.929 05:07:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:16.929 1+0 records in 00:27:16.929 1+0 records out 00:27:16.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441407 s, 9.3 MB/s 00:27:16.930 05:07:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:16.930 05:07:40 -- common/autotest_common.sh@884 -- # size=4096 00:27:16.930 05:07:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:16.930 05:07:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:16.930 05:07:40 -- common/autotest_common.sh@887 -- # return 0 00:27:16.930 05:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:16.930 05:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:16.930 05:07:40 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:17.189 05:07:40 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:17.189 { 00:27:17.189 "nbd_device": "/dev/nbd0", 00:27:17.189 "bdev_name": "Nvme0n1" 00:27:17.189 } 00:27:17.189 ]' 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:17.189 { 00:27:17.189 "nbd_device": "/dev/nbd0", 00:27:17.189 "bdev_name": "Nvme0n1" 00:27:17.189 } 00:27:17.189 ]' 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@51 -- # local i 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:17.189 05:07:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@41 -- # break 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@45 -- # return 0 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.448 05:07:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@65 -- # true 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@65 -- # count=0 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@122 -- # count=0 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@127 -- # return 0 00:27:17.715 05:07:41 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@12 -- # local i 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:17.715 05:07:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:27:17.976 /dev/nbd0 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:17.976 05:07:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:17.976 05:07:41 -- common/autotest_common.sh@867 -- # local i 00:27:17.976 05:07:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:17.976 05:07:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:17.976 05:07:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:17.976 05:07:41 -- common/autotest_common.sh@871 -- # break 00:27:17.976 05:07:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:17.976 05:07:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:17.976 05:07:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:17.976 1+0 records in 00:27:17.976 1+0 records out 00:27:17.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481583 s, 8.5 MB/s 00:27:17.976 05:07:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.976 05:07:41 -- common/autotest_common.sh@884 -- # size=4096 00:27:17.976 05:07:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.976 05:07:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:17.976 05:07:41 -- common/autotest_common.sh@887 -- # return 0 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.976 05:07:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:18.235 { 00:27:18.235 "nbd_device": "/dev/nbd0", 00:27:18.235 "bdev_name": "Nvme0n1" 00:27:18.235 } 00:27:18.235 ]' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:18.235 { 00:27:18.235 "nbd_device": "/dev/nbd0", 00:27:18.235 "bdev_name": "Nvme0n1" 00:27:18.235 } 00:27:18.235 ]' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@65 -- # count=1 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@66 -- # echo 1 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@95 -- # count=1 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:27:18.235 05:07:41 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:18.235 256+0 records in 00:27:18.235 256+0 records out 00:27:18.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558218 s, 188 MB/s 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:18.235 256+0 records in 00:27:18.235 256+0 records out 00:27:18.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0728149 s, 14.4 MB/s 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@51 -- # local i 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:18.235 05:07:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@41 -- # break 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@45 -- # return 0 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:18.494 05:07:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:18.754 
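The nbd_dd_data_verify stage a few entries up is a plain dd/cmp round trip: fill a scratch file with random data, push it through the kernel NBD device with direct I/O, and compare the bytes read back. A minimal standalone sketch of the same check (scratch path shortened here; the exported /dev/nbd0 is assumed to already exist):

    tmp=/tmp/nbdrandtest                                      # hypothetical scratch path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD export
    cmp -b -n 1M "$tmp" /dev/nbd0                             # read back and compare byte-for-byte
    rm -f "$tmp"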
05:07:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@65 -- # true 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@65 -- # count=0 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@104 -- # count=0 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@109 -- # return 0 00:27:18.754 05:07:42 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:18.754 05:07:42 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:19.012 malloc_lvol_verify 00:27:19.012 05:07:42 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:19.271 b6dd02ad-0d03-4bfd-ba23-84d17bdf4490 00:27:19.271 05:07:42 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:19.531 b526338b-fd84-4fc3-bbf7-84320db44ef7 00:27:19.531 05:07:42 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:19.531 /dev/nbd0 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:19.790 mke2fs 1.47.0 (5-Feb-2023) 00:27:19.790 00:27:19.790 Filesystem too small for a journal 00:27:19.790 Discarding device blocks: 0/1024 done 00:27:19.790 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:19.790 00:27:19.790 Allocating group tables: 0/1 done 00:27:19.790 Writing inode tables: 0/1 done 00:27:19.790 Writing superblocks and filesystem accounting information: 0/1 done 00:27:19.790 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@51 -- # local i 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:19.790 05:07:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@41 -- # break 00:27:20.049 05:07:43 -- 
bdev/nbd_common.sh@45 -- # return 0 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:20.049 05:07:43 -- bdev/nbd_common.sh@147 -- # return 0 00:27:20.049 05:07:43 -- bdev/blockdev.sh@324 -- # killprocess 90996 00:27:20.049 05:07:43 -- common/autotest_common.sh@936 -- # '[' -z 90996 ']' 00:27:20.049 05:07:43 -- common/autotest_common.sh@940 -- # kill -0 90996 00:27:20.049 05:07:43 -- common/autotest_common.sh@941 -- # uname 00:27:20.049 05:07:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:20.049 05:07:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90996 00:27:20.049 05:07:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:20.049 05:07:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:20.049 killing process with pid 90996 00:27:20.049 05:07:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90996' 00:27:20.049 05:07:43 -- common/autotest_common.sh@955 -- # kill 90996 00:27:20.049 05:07:43 -- common/autotest_common.sh@960 -- # wait 90996 00:27:20.988 05:07:44 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:20.988 00:27:20.988 real 0m5.119s 00:27:20.988 user 0m7.436s 00:27:20.988 sys 0m1.029s 00:27:20.988 05:07:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:20.988 ************************************ 00:27:20.988 END TEST bdev_nbd 00:27:20.988 ************************************ 00:27:20.988 05:07:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.988 05:07:44 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:20.988 05:07:44 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:27:20.988 skipping fio tests on NVMe due to multi-ns failures. 00:27:20.988 05:07:44 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:20.988 05:07:44 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:20.988 05:07:44 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:20.988 05:07:44 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:20.988 05:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:20.988 05:07:44 -- common/autotest_common.sh@10 -- # set +x 00:27:20.988 ************************************ 00:27:20.988 START TEST bdev_verify 00:27:20.988 ************************************ 00:27:20.988 05:07:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:20.988 [2024-11-18 05:07:44.440584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:20.988 [2024-11-18 05:07:44.440751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91173 ] 00:27:21.247 [2024-11-18 05:07:44.607324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.247 [2024-11-18 05:07:44.762449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.247 [2024-11-18 05:07:44.762470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.815 Running I/O for 5 seconds... 
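While that five-second verify pass runs, it is worth unpacking what END TEST bdev_nbd above actually exercised: a malloc bdev is wrapped in an lvolstore, the logical volume is exported to the kernel as /dev/nbd0, and mkfs.ext4 runs against it. A condensed sketch of that sequence, with the RPC names and socket path taken from the log (sizes as logged):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on top of it
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in lvs
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as a kernel block device
    mkfs.ext4 /dev/nbd0                                             # the mke2fs output seen above
    $rpc -s $sock nbd_stop_disk /dev/nbd0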
00:27:27.088 
00:27:27.088 Latency(us) 
00:27:27.088 [2024-11-18T05:07:50.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:27.088 [2024-11-18T05:07:50.612Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:27:27.088 Verification LBA range: start 0x0 length 0xa0000 
00:27:27.088 Nvme0n1 : 5.01 17567.59 68.62 0.00 0.00 7252.97 741.00 15847.80 
00:27:27.088 [2024-11-18T05:07:50.612Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:27:27.088 Verification LBA range: start 0xa0000 length 0xa0000 
00:27:27.088 Nvme0n1 : 5.01 17535.37 68.50 0.00 0.00 7266.88 411.46 14477.50 
00:27:27.088 [2024-11-18T05:07:50.612Z] =================================================================================================================== 
00:27:27.088 [2024-11-18T05:07:50.612Z] Total : 35102.95 137.12 0.00 0.00 7259.92 411.46 15847.80 
00:27:35.210 
00:27:35.210 real 0m14.082s 
00:27:35.210 user 0m27.039s 
00:27:35.210 sys 0m0.295s 
00:27:35.210 05:07:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:27:35.210 05:07:58 -- common/autotest_common.sh@10 -- # set +x 
00:27:35.210 ************************************ 
00:27:35.210 END TEST bdev_verify 
00:27:35.210 ************************************ 
00:27:35.210 05:07:58 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 
00:27:35.210 05:07:58 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 
00:27:35.210 05:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:27:35.210 05:07:58 -- common/autotest_common.sh@10 -- # set +x 
00:27:35.210 ************************************ 
00:27:35.210 START TEST bdev_verify_big_io 
00:27:35.210 ************************************ 
00:27:35.210 05:07:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 
00:27:35.473 [2024-11-18 05:07:58.573053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:35.473 [2024-11-18 05:07:58.573229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91314 ] 
00:27:35.473 [2024-11-18 05:07:58.742027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 
00:27:35.473 [2024-11-18 05:07:58.897915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:27:35.473 [2024-11-18 05:07:58.897929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:27:36.041 Running I/O for 5 seconds...
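The verify numbers above come straight from bdevperf; an equivalent manual invocation, with the flags exactly as the test passed them:

    # -q 128: 128 outstanding I/Os per job, -o 4096: 4 KiB I/Os,
    # -w verify: write, then read back and compare, -t 5: run for 5 seconds,
    # -m 0x3: two reactor cores (hence the Core Mask 0x1 and 0x2 jobs above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3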
00:27:41.315 
00:27:41.315 Latency(us) 
00:27:41.315 [2024-11-18T05:08:04.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:41.315 [2024-11-18T05:08:04.839Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 
00:27:41.315 Verification LBA range: start 0x0 length 0xa000 
00:27:41.315 Nvme0n1 : 5.03 1734.78 108.42 0.00 0.00 72716.87 448.70 115343.36 
00:27:41.315 [2024-11-18T05:08:04.839Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:27:41.315 Verification LBA range: start 0xa000 length 0xa000 
00:27:41.315 Nvme0n1 : 5.02 1762.28 110.14 0.00 0.00 71587.46 517.59 122016.12 
00:27:41.315 [2024-11-18T05:08:04.839Z] =================================================================================================================== 
00:27:41.315 [2024-11-18T05:08:04.839Z] Total : 3497.06 218.57 0.00 0.00 72147.96 448.70 122016.12 
00:27:42.253 
00:27:42.253 real 0m7.035s 
00:27:42.253 user 0m13.003s 
00:27:42.253 sys 0m0.224s 
00:27:42.253 05:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:27:42.253 05:08:05 -- common/autotest_common.sh@10 -- # set +x 
00:27:42.253 ************************************ 
00:27:42.253 END TEST bdev_verify_big_io 
00:27:42.253 ************************************ 
00:27:42.253 05:08:05 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:27:42.253 05:08:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 
00:27:42.253 05:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:27:42.253 05:08:05 -- common/autotest_common.sh@10 -- # set +x 
00:27:42.253 ************************************ 
00:27:42.253 START TEST bdev_write_zeroes 
00:27:42.253 ************************************ 
00:27:42.253 05:08:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:27:42.514 [2024-11-18 05:08:05.648334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:42.514 [2024-11-18 05:08:05.648446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91407 ] 
00:27:42.514 [2024-11-18 05:08:05.799409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 
00:27:42.514 [2024-11-18 05:08:05.946749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:27:43.134 Running I/O for 1 seconds...
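A quick consistency check on the big-I/O table: MiB/s is just IOPS times the 64 KiB I/O size, i.e. IOPS divided by 16, so the 1734.78 IOPS job works out to exactly the reported 108.42 MiB/s:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1734.78 * 65536 / 1048576 }'   # prints 108.42 MiB/s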
00:27:44.068 
00:27:44.068 Latency(us) 
00:27:44.068 [2024-11-18T05:08:07.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:44.068 [2024-11-18T05:08:07.592Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 
00:27:44.068 Nvme0n1 : 1.00 61570.51 240.51 0.00 0.00 2073.54 960.70 7060.01 
00:27:44.068 [2024-11-18T05:08:07.592Z] =================================================================================================================== 
00:27:44.068 [2024-11-18T05:08:07.592Z] Total : 61570.51 240.51 0.00 0.00 2073.54 960.70 7060.01 
00:27:45.004 
00:27:45.005 
00:27:45.005 real 0m2.738s 
00:27:45.005 user 0m2.464s 
00:27:45.005 sys 0m0.174s 
00:27:45.005 05:08:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:27:45.005 05:08:08 -- common/autotest_common.sh@10 -- # set +x 
00:27:45.005 ************************************ 
00:27:45.005 END TEST bdev_write_zeroes 
00:27:45.005 ************************************ 
00:27:45.005 05:08:08 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:27:45.005 05:08:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 
00:27:45.005 05:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:27:45.005 05:08:08 -- common/autotest_common.sh@10 -- # set +x 
00:27:45.005 ************************************ 
00:27:45.005 START TEST bdev_json_nonenclosed 
00:27:45.005 ************************************ 
00:27:45.005 05:08:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:27:45.005 [2024-11-18 05:08:08.448301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:45.263 [2024-11-18 05:08:08.448477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91455 ] 
00:27:45.263 [2024-11-18 05:08:08.616087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 
00:27:45.263 [2024-11-18 05:08:08.772990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:27:45.263 [2024-11-18 05:08:08.773192] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
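The *ERROR* just above is the expected outcome of this negative test: bdev_json_nonenclosed feeds bdevperf a config whose subsystems key is not wrapped in a top-level JSON object. The log never prints nonenclosed.json itself, so the exact contents below are an assumption, but the rejected versus accepted shapes would look like:

    "subsystems": []        <- rejected: top-level content not enclosed in {}
    { "subsystems": [] }    <- minimal accepted shape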
00:27:45.263 [2024-11-18 05:08:08.773230] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:45.845 00:27:45.845 real 0m0.714s 00:27:45.845 user 0m0.503s 00:27:45.845 sys 0m0.111s 00:27:45.845 05:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:45.845 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:45.845 ************************************ 00:27:45.845 END TEST bdev_json_nonenclosed 00:27:45.845 ************************************ 00:27:45.845 05:08:09 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:45.845 05:08:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:45.845 05:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:45.845 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:45.845 ************************************ 00:27:45.845 START TEST bdev_json_nonarray 00:27:45.845 ************************************ 00:27:45.845 05:08:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:45.845 [2024-11-18 05:08:09.200710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:45.845 [2024-11-18 05:08:09.200860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91481 ] 00:27:45.845 [2024-11-18 05:08:09.357987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.108 [2024-11-18 05:08:09.512512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.108 [2024-11-18 05:08:09.512727] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
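Likewise for bdev_json_nonarray just above: the config parses as an object, but 'subsystems' maps to something other than an array, which json_config.c rejects as logged. Again an assumed illustration, since nonarray.json is not printed here:

    { "subsystems": {} }    <- rejected: 'subsystems' should be an array
    { "subsystems": [] }    <- accepted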
00:27:46.108 [2024-11-18 05:08:09.512750] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:46.366 00:27:46.366 real 0m0.694s 00:27:46.366 user 0m0.484s 00:27:46.366 sys 0m0.110s 00:27:46.366 05:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:46.366 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:46.366 ************************************ 00:27:46.366 END TEST bdev_json_nonarray 00:27:46.366 ************************************ 00:27:46.625 05:08:09 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:27:46.625 05:08:09 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:27:46.625 05:08:09 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:27:46.625 05:08:09 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:27:46.625 05:08:09 -- bdev/blockdev.sh@809 -- # cleanup 00:27:46.625 05:08:09 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:46.625 05:08:09 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:46.625 05:08:09 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:27:46.625 05:08:09 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:27:46.625 05:08:09 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:27:46.625 05:08:09 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:27:46.625 00:27:46.625 real 0m37.677s 00:27:46.625 user 1m0.600s 00:27:46.625 sys 0m3.241s 00:27:46.625 05:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:46.625 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:46.625 ************************************ 00:27:46.625 END TEST blockdev_nvme 00:27:46.625 ************************************ 00:27:46.625 05:08:09 -- spdk/autotest.sh@206 -- # uname -s 00:27:46.625 05:08:09 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:27:46.625 05:08:09 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:27:46.625 05:08:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:46.625 05:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:46.625 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:46.625 ************************************ 00:27:46.625 START TEST blockdev_nvme_gpt 00:27:46.625 ************************************ 00:27:46.625 05:08:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:27:46.625 * Looking for test storage... 
00:27:46.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:46.625 05:08:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:46.625 05:08:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:46.625 05:08:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:46.625 05:08:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:46.625 05:08:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:46.625 05:08:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:46.625 05:08:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:46.625 05:08:10 -- scripts/common.sh@335 -- # IFS=.-: 00:27:46.625 05:08:10 -- scripts/common.sh@335 -- # read -ra ver1 00:27:46.625 05:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.625 05:08:10 -- scripts/common.sh@336 -- # read -ra ver2 00:27:46.625 05:08:10 -- scripts/common.sh@337 -- # local 'op=<' 00:27:46.625 05:08:10 -- scripts/common.sh@339 -- # ver1_l=2 00:27:46.625 05:08:10 -- scripts/common.sh@340 -- # ver2_l=1 00:27:46.625 05:08:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:46.625 05:08:10 -- scripts/common.sh@343 -- # case "$op" in 00:27:46.625 05:08:10 -- scripts/common.sh@344 -- # : 1 00:27:46.625 05:08:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:46.625 05:08:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:46.625 05:08:10 -- scripts/common.sh@364 -- # decimal 1 00:27:46.625 05:08:10 -- scripts/common.sh@352 -- # local d=1 00:27:46.625 05:08:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.625 05:08:10 -- scripts/common.sh@354 -- # echo 1 00:27:46.625 05:08:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:46.625 05:08:10 -- scripts/common.sh@365 -- # decimal 2 00:27:46.625 05:08:10 -- scripts/common.sh@352 -- # local d=2 00:27:46.625 05:08:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.625 05:08:10 -- scripts/common.sh@354 -- # echo 2 00:27:46.625 05:08:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:46.625 05:08:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:46.625 05:08:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:46.625 05:08:10 -- scripts/common.sh@367 -- # return 0 00:27:46.625 05:08:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.625 05:08:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.625 --rc genhtml_branch_coverage=1 00:27:46.625 --rc genhtml_function_coverage=1 00:27:46.625 --rc genhtml_legend=1 00:27:46.625 --rc geninfo_all_blocks=1 00:27:46.625 --rc geninfo_unexecuted_blocks=1 00:27:46.625 00:27:46.625 ' 00:27:46.625 05:08:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.625 --rc genhtml_branch_coverage=1 00:27:46.625 --rc genhtml_function_coverage=1 00:27:46.625 --rc genhtml_legend=1 00:27:46.625 --rc geninfo_all_blocks=1 00:27:46.625 --rc geninfo_unexecuted_blocks=1 00:27:46.625 00:27:46.625 ' 00:27:46.625 05:08:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.625 --rc genhtml_branch_coverage=1 00:27:46.625 --rc genhtml_function_coverage=1 00:27:46.625 --rc genhtml_legend=1 00:27:46.625 --rc geninfo_all_blocks=1 00:27:46.625 --rc geninfo_unexecuted_blocks=1 00:27:46.625 00:27:46.625 ' 00:27:46.625 05:08:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.625 --rc genhtml_branch_coverage=1 00:27:46.625 --rc genhtml_function_coverage=1 00:27:46.625 --rc genhtml_legend=1 00:27:46.625 --rc geninfo_all_blocks=1 00:27:46.625 --rc geninfo_unexecuted_blocks=1 00:27:46.625 00:27:46.625 ' 00:27:46.625 05:08:10 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:46.625 05:08:10 -- bdev/nbd_common.sh@6 -- # set -e 00:27:46.625 05:08:10 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:46.625 05:08:10 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:46.625 05:08:10 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:46.625 05:08:10 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:46.625 05:08:10 -- bdev/blockdev.sh@18 -- # : 00:27:46.625 05:08:10 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:46.625 05:08:10 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:46.625 05:08:10 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:46.625 05:08:10 -- bdev/blockdev.sh@672 -- # uname -s 00:27:46.625 05:08:10 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:46.625 05:08:10 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:46.625 05:08:10 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:27:46.625 05:08:10 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:46.625 05:08:10 -- bdev/blockdev.sh@682 -- # dek= 00:27:46.625 05:08:10 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:46.625 05:08:10 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:46.625 05:08:10 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:46.625 05:08:10 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:27:46.625 05:08:10 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:27:46.625 05:08:10 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:46.625 05:08:10 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=91558 00:27:46.625 05:08:10 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:46.625 05:08:10 -- bdev/blockdev.sh@47 -- # waitforlisten 91558 00:27:46.625 05:08:10 -- common/autotest_common.sh@829 -- # '[' -z 91558 ']' 00:27:46.625 05:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.625 05:08:10 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:46.625 05:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.625 05:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.625 05:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.625 05:08:10 -- common/autotest_common.sh@10 -- # set +x 00:27:46.884 [2024-11-18 05:08:10.207731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
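The cmp_versions walk a few entries up (scripts/common.sh, invoked as lt 1.15 2 for the lcov check) splits each version string on '.', '-' and ':' into arrays and compares the fields numerically, left to right. A standalone sketch of the same technique, not the autotest helper itself:

    version_lt() {                        # succeeds when $1 sorts before $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do   # missing fields compare as 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"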
00:27:46.884 [2024-11-18 05:08:10.208382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91558 ] 00:27:46.884 [2024-11-18 05:08:10.376553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.143 [2024-11-18 05:08:10.532060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:47.143 [2024-11-18 05:08:10.532306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.711 05:08:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.711 05:08:11 -- common/autotest_common.sh@862 -- # return 0 00:27:47.711 05:08:11 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:47.711 05:08:11 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:27:47.711 05:08:11 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:47.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:27:47.971 Waiting for block devices as requested 00:27:48.230 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:48.230 05:08:11 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:27:48.230 05:08:11 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:27:48.230 05:08:11 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:27:48.230 05:08:11 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:27:48.230 05:08:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:27:48.230 05:08:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:27:48.230 05:08:11 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:27:48.230 05:08:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:48.230 05:08:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:27:48.230 05:08:11 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:27:48.230 05:08:11 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:27:48.230 05:08:11 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:27:48.230 05:08:11 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:27:48.230 05:08:11 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:27:48.230 05:08:11 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:27:48.230 05:08:11 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:27:48.230 05:08:11 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:27:48.230 BYT; 00:27:48.230 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:27:48.230 05:08:11 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:27:48.230 BYT; 00:27:48.230 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:27:48.231 05:08:11 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:27:48.231 05:08:11 -- bdev/blockdev.sh@114 -- # break 00:27:48.231 05:08:11 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:27:48.231 05:08:11 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:27:48.231 05:08:11 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:27:48.231 05:08:11 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:27:48.490 05:08:11 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:27:48.490 05:08:11 -- scripts/common.sh@410 -- # local spdk_guid 00:27:48.490 05:08:11 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:27:48.490 05:08:11 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:48.490 05:08:11 -- scripts/common.sh@415 -- # IFS='()' 00:27:48.490 05:08:11 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:27:48.490 05:08:11 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:48.490 05:08:11 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:27:48.490 05:08:11 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:48.490 05:08:11 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:48.490 05:08:11 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:48.490 05:08:11 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:27:48.490 05:08:11 -- scripts/common.sh@422 -- # local spdk_guid 00:27:48.490 05:08:11 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:27:48.490 05:08:11 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:48.490 05:08:11 -- scripts/common.sh@427 -- # IFS='()' 00:27:48.490 05:08:11 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:27:48.490 05:08:11 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:48.490 05:08:11 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:27:48.490 05:08:11 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:48.490 05:08:11 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:48.490 05:08:11 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:48.490 05:08:11 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:27:49.427 The operation has completed successfully. 00:27:49.427 05:08:12 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:27:50.364 The operation has completed successfully. 
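The two "operation has completed successfully" lines are sgdisk retyping the freshly created partitions with SPDK's own GPT GUIDs. The trick traced above is to parse the GUID out of gpt.h with an IFS='()' read and strip the 0x prefixes; roughly, assuming the header keeps the GUID inside parentheses as the parse implies:

    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//0x/}    # 0x6527994e-0x2c5a-... -> 6527994e-2c5a-4eec-9613-8f5944074e8b
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1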
00:27:50.364 05:08:13 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:50.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:27:50.932 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:51.500 05:08:14 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 [] 00:27:51.500 05:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.500 05:08:14 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:27:51.500 05:08:14 -- bdev/blockdev.sh@79 -- # local json 00:27:51.500 05:08:14 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:51.500 05:08:14 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:51.500 05:08:14 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 05:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.500 05:08:14 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 05:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.500 05:08:14 -- bdev/blockdev.sh@738 -- # cat 00:27:51.500 05:08:14 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 05:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.500 05:08:14 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 05:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.500 05:08:14 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 05:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.500 05:08:14 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:51.500 05:08:14 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:51.500 05:08:14 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:51.500 05:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.500 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.760 05:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.760 05:08:15 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:51.760 05:08:15 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:51.760 05:08:15 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 
0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:27:51.760 05:08:15 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:51.760 05:08:15 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:27:51.760 05:08:15 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:51.760 05:08:15 -- bdev/blockdev.sh@752 -- # killprocess 91558 00:27:51.760 05:08:15 -- common/autotest_common.sh@936 -- # '[' -z 91558 ']' 00:27:51.760 05:08:15 -- common/autotest_common.sh@940 -- # kill -0 91558 00:27:51.760 05:08:15 -- common/autotest_common.sh@941 -- # uname 00:27:51.760 05:08:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:51.760 05:08:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91558 00:27:51.760 05:08:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:51.760 05:08:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:51.760 killing process with pid 91558 00:27:51.760 05:08:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91558' 00:27:51.760 05:08:15 -- common/autotest_common.sh@955 -- # kill 91558 00:27:51.760 05:08:15 -- common/autotest_common.sh@960 -- # wait 91558 00:27:53.665 05:08:16 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:53.666 05:08:16 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:27:53.666 05:08:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:53.666 05:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:53.666 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.666 ************************************ 00:27:53.666 START TEST bdev_hello_world 00:27:53.666 ************************************ 00:27:53.666 05:08:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:27:53.666 [2024-11-18 05:08:16.826030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:53.666 [2024-11-18 05:08:16.826204] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91958 ] 00:27:53.666 [2024-11-18 05:08:16.995104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.666 [2024-11-18 05:08:17.147406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.235 [2024-11-18 05:08:17.489369] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:54.235 [2024-11-18 05:08:17.489455] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:27:54.235 [2024-11-18 05:08:17.489477] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:54.235 [2024-11-18 05:08:17.491952] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:54.235 [2024-11-18 05:08:17.492511] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:54.235 [2024-11-18 05:08:17.492556] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:54.235 [2024-11-18 05:08:17.492782] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:54.235 00:27:54.235 [2024-11-18 05:08:17.492829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:55.173 00:27:55.173 real 0m1.637s 00:27:55.173 user 0m1.345s 00:27:55.173 sys 0m0.192s 00:27:55.173 05:08:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:55.173 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 ************************************ 00:27:55.173 END TEST bdev_hello_world 00:27:55.173 ************************************ 00:27:55.173 05:08:18 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:55.173 05:08:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:55.173 05:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:55.173 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 ************************************ 00:27:55.173 START TEST bdev_bounds 00:27:55.173 ************************************ 00:27:55.173 05:08:18 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:27:55.173 05:08:18 -- bdev/blockdev.sh@288 -- # bdevio_pid=92000 00:27:55.173 05:08:18 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:55.173 05:08:18 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 92000' 00:27:55.173 Process bdevio pid: 92000 00:27:55.173 05:08:18 -- bdev/blockdev.sh@291 -- # waitforlisten 92000 00:27:55.173 05:08:18 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:55.173 05:08:18 -- common/autotest_common.sh@829 -- # '[' -z 92000 ']' 00:27:55.173 05:08:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.173 05:08:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.173 05:08:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
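The hello-world pass that just finished is the stock example app pointed at the first GPT partition: open Nvme0n1p1, write "Hello World!", read it back, stop. The direct invocation, as run by the test:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1p1    # bdev to open; the app writes and re-reads "Hello World!"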
00:27:55.173 05:08:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.173 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 [2024-11-18 05:08:18.516248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:55.173 [2024-11-18 05:08:18.516415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92000 ] 00:27:55.173 [2024-11-18 05:08:18.685001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:55.432 [2024-11-18 05:08:18.837528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.433 [2024-11-18 05:08:18.837646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.433 [2024-11-18 05:08:18.837668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.001 05:08:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.001 05:08:19 -- common/autotest_common.sh@862 -- # return 0 00:27:56.001 05:08:19 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:56.260 I/O targets: 00:27:56.260 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:27:56.260 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:27:56.260 00:27:56.260 00:27:56.260 CUnit - A unit testing framework for C - Version 2.1-3 00:27:56.260 http://cunit.sourceforge.net/ 00:27:56.260 00:27:56.260 00:27:56.260 Suite: bdevio tests on: Nvme0n1p2 00:27:56.260 Test: blockdev write read block ...passed 00:27:56.260 Test: blockdev write zeroes read block ...passed 00:27:56.260 Test: blockdev write zeroes read no split ...passed 00:27:56.260 Test: blockdev write zeroes read split ...passed 00:27:56.260 Test: blockdev write zeroes read split partial ...passed 00:27:56.260 Test: blockdev reset ...[2024-11-18 05:08:19.608195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:56.260 passed 00:27:56.260 Test: blockdev write read 8 blocks ...[2024-11-18 05:08:19.611799] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:56.260 passed 00:27:56.260 Test: blockdev write read size > 128k ...passed 00:27:56.260 Test: blockdev write read invalid size ...passed 00:27:56.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:56.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:56.260 Test: blockdev write read max offset ...passed 00:27:56.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:56.260 Test: blockdev writev readv 8 blocks ...passed 00:27:56.260 Test: blockdev writev readv 30 x 1block ...passed 00:27:56.260 Test: blockdev writev readv block ...passed 00:27:56.260 Test: blockdev writev readv size > 128k ...passed 00:27:56.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:56.260 Test: blockdev comparev and writev ...[2024-11-18 05:08:19.622832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28b20b000 len:0x1000 00:27:56.260 [2024-11-18 05:08:19.622901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:56.260 passed 00:27:56.260 Test: blockdev nvme passthru rw ...passed 00:27:56.260 Test: blockdev nvme passthru vendor specific ...passed 00:27:56.260 Test: blockdev nvme admin passthru ...passed 00:27:56.260 Test: blockdev copy ...passed 00:27:56.260 Suite: bdevio tests on: Nvme0n1p1 00:27:56.260 Test: blockdev write read block ...passed 00:27:56.260 Test: blockdev write zeroes read block ...passed 00:27:56.260 Test: blockdev write zeroes read no split ...passed 00:27:56.260 Test: blockdev write zeroes read split ...passed 00:27:56.261 Test: blockdev write zeroes read split partial ...passed 00:27:56.261 Test: blockdev reset ...[2024-11-18 05:08:19.693156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:56.261 passed 00:27:56.261 Test: blockdev write read 8 blocks ...[2024-11-18 05:08:19.696508] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:56.261 passed 00:27:56.261 Test: blockdev write read size > 128k ...passed 00:27:56.261 Test: blockdev write read invalid size ...passed 00:27:56.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:56.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:56.261 Test: blockdev write read max offset ...passed 00:27:56.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:56.261 Test: blockdev writev readv 8 blocks ...passed 00:27:56.261 Test: blockdev writev readv 30 x 1block ...passed 00:27:56.261 Test: blockdev writev readv block ...passed 00:27:56.261 Test: blockdev writev readv size > 128k ...passed 00:27:56.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:56.261 Test: blockdev comparev and writev ...[2024-11-18 05:08:19.707159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x28b20d000 len:0x1000 00:27:56.261 [2024-11-18 05:08:19.707231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:56.261 passed 00:27:56.261 Test: blockdev nvme passthru rw ...passed 00:27:56.261 Test: blockdev nvme passthru vendor specific ...passed 00:27:56.261 Test: blockdev nvme admin passthru ...passed 00:27:56.261 Test: blockdev copy ...passed 00:27:56.261 00:27:56.261 Run Summary: Type Total Ran Passed Failed Inactive 00:27:56.261 suites 2 2 n/a 0 0 00:27:56.261 tests 46 46 46 0 0 00:27:56.261 asserts 284 284 284 0 n/a 00:27:56.261 00:27:56.261 Elapsed time = 0.482 seconds 00:27:56.261 0 00:27:56.261 05:08:19 -- bdev/blockdev.sh@293 -- # killprocess 92000 00:27:56.261 05:08:19 -- common/autotest_common.sh@936 -- # '[' -z 92000 ']' 00:27:56.261 05:08:19 -- common/autotest_common.sh@940 -- # kill -0 92000 00:27:56.261 05:08:19 -- common/autotest_common.sh@941 -- # uname 00:27:56.261 05:08:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:56.261 05:08:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92000 00:27:56.261 05:08:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:56.261 05:08:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:56.261 killing process with pid 92000 00:27:56.261 05:08:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92000' 00:27:56.261 05:08:19 -- common/autotest_common.sh@955 -- # kill 92000 00:27:56.261 05:08:19 -- common/autotest_common.sh@960 -- # wait 92000 00:27:57.199 05:08:20 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:57.199 00:27:57.199 real 0m2.252s 00:27:57.199 user 0m5.427s 00:27:57.199 sys 0m0.324s 00:27:57.199 05:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:57.199 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:27:57.199 ************************************ 00:27:57.199 END TEST bdev_bounds 00:27:57.199 ************************************ 00:27:57.459 05:08:20 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:57.459 05:08:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:57.459 05:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:57.459 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:27:57.459 ************************************ 00:27:57.459 START TEST bdev_nbd 00:27:57.459 ************************************ 00:27:57.459 
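The bdev_nbd test that starts here drives the same GPT partitions through the kernel's NBD layer. A condensed sketch of the flow, using only commands that appear in this log (the /sys/module/nbd check just below guards on the nbd kernel module being loaded):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    # bare bdev service that listens for RPCs on $SOCK
    "$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$SPDK"/test/bdev/bdev.json &
    # export a bdev as a kernel block device, list exports, tear down
    "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1p1 /dev/nbd0
    "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_get_disks
    "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0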
05:08:20 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:57.459 05:08:20 -- bdev/blockdev.sh@298 -- # uname -s 00:27:57.459 05:08:20 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:57.459 05:08:20 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.459 05:08:20 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:57.459 05:08:20 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:27:57.459 05:08:20 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:57.459 05:08:20 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:27:57.459 05:08:20 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:57.459 05:08:20 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:57.459 05:08:20 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:57.459 05:08:20 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:27:57.459 05:08:20 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:57.459 05:08:20 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:57.459 05:08:20 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:57.459 05:08:20 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:57.459 05:08:20 -- bdev/blockdev.sh@316 -- # nbd_pid=92053 00:27:57.459 05:08:20 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:57.459 05:08:20 -- bdev/blockdev.sh@318 -- # waitforlisten 92053 /var/tmp/spdk-nbd.sock 00:27:57.459 05:08:20 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:57.459 05:08:20 -- common/autotest_common.sh@829 -- # '[' -z 92053 ']' 00:27:57.459 05:08:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:57.459 05:08:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:57.459 05:08:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:57.459 05:08:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.459 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:27:57.459 [2024-11-18 05:08:20.823601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:57.459 [2024-11-18 05:08:20.823751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.718 [2024-11-18 05:08:20.992172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.718 [2024-11-18 05:08:21.139881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.285 05:08:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.285 05:08:21 -- common/autotest_common.sh@862 -- # return 0 00:27:58.285 05:08:21 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@24 -- # local i 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:58.285 05:08:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:27:58.543 05:08:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:58.543 05:08:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:58.543 05:08:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:58.543 05:08:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:58.543 05:08:21 -- common/autotest_common.sh@867 -- # local i 00:27:58.543 05:08:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:58.543 05:08:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:58.543 05:08:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:58.543 05:08:21 -- common/autotest_common.sh@871 -- # break 00:27:58.543 05:08:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:58.543 05:08:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:58.543 05:08:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:58.543 1+0 records in 00:27:58.543 1+0 records out 00:27:58.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00556796 s, 736 kB/s 00:27:58.543 05:08:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.543 05:08:21 -- common/autotest_common.sh@884 -- # size=4096 00:27:58.543 05:08:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.543 05:08:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:58.543 05:08:21 -- common/autotest_common.sh@887 -- # return 0 00:27:58.543 05:08:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:58.543 05:08:21 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:58.543 05:08:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:27:58.802 05:08:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:58.802 05:08:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:58.802 05:08:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:58.802 05:08:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:58.802 05:08:22 -- common/autotest_common.sh@867 -- # local i 00:27:58.802 05:08:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:58.802 05:08:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:58.802 05:08:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:58.802 05:08:22 -- common/autotest_common.sh@871 -- # break 00:27:58.802 05:08:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:58.802 05:08:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:58.802 05:08:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:58.802 1+0 records in 00:27:58.802 1+0 records out 00:27:58.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654698 s, 6.3 MB/s 00:27:58.802 05:08:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.802 05:08:22 -- common/autotest_common.sh@884 -- # size=4096 00:27:58.802 05:08:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.802 05:08:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:58.802 05:08:22 -- common/autotest_common.sh@887 -- # return 0 00:27:58.802 05:08:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:58.802 05:08:22 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:58.802 05:08:22 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:59.061 { 00:27:59.061 "nbd_device": "/dev/nbd0", 00:27:59.061 "bdev_name": "Nvme0n1p1" 00:27:59.061 }, 00:27:59.061 { 00:27:59.061 "nbd_device": "/dev/nbd1", 00:27:59.061 "bdev_name": "Nvme0n1p2" 00:27:59.061 } 00:27:59.061 ]' 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:59.061 { 00:27:59.061 "nbd_device": "/dev/nbd0", 00:27:59.061 "bdev_name": "Nvme0n1p1" 00:27:59.061 }, 00:27:59.061 { 00:27:59.061 "nbd_device": "/dev/nbd1", 00:27:59.061 "bdev_name": "Nvme0n1p2" 00:27:59.061 } 00:27:59.061 ]' 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@51 -- # local i 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.061 05:08:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:59.320 05:08:22 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@41 -- # break 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.320 05:08:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@41 -- # break 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.579 05:08:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@65 -- # true 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@65 -- # count=0 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@122 -- # count=0 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@127 -- # return 0 00:27:59.839 05:08:23 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@12 -- # local i 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:59.839 05:08:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:28:00.098 /dev/nbd0 00:28:00.098 05:08:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:00.098 05:08:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:00.098 05:08:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:00.098 05:08:23 -- common/autotest_common.sh@867 -- # local i 00:28:00.098 05:08:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:00.098 05:08:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:00.098 05:08:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:00.098 05:08:23 -- common/autotest_common.sh@871 -- # break 00:28:00.098 05:08:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:00.098 05:08:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:00.098 05:08:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.098 1+0 records in 00:28:00.098 1+0 records out 00:28:00.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459711 s, 8.9 MB/s 00:28:00.098 05:08:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.098 05:08:23 -- common/autotest_common.sh@884 -- # size=4096 00:28:00.098 05:08:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.098 05:08:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:00.098 05:08:23 -- common/autotest_common.sh@887 -- # return 0 00:28:00.098 05:08:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.098 05:08:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.098 05:08:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:28:00.357 /dev/nbd1 00:28:00.357 05:08:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:00.357 05:08:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:00.357 05:08:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:00.357 05:08:23 -- common/autotest_common.sh@867 -- # local i 00:28:00.357 05:08:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:00.357 05:08:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:00.357 05:08:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:00.357 05:08:23 -- common/autotest_common.sh@871 -- # break 00:28:00.357 05:08:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:00.357 05:08:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:00.357 05:08:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.357 1+0 records in 00:28:00.357 1+0 records out 00:28:00.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453309 s, 9.0 MB/s 00:28:00.357 05:08:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.357 05:08:23 -- common/autotest_common.sh@884 -- # size=4096 00:28:00.357 05:08:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.357 05:08:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:00.357 05:08:23 -- common/autotest_common.sh@887 -- # return 0 00:28:00.357 05:08:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.357 05:08:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.357 05:08:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
00:28:00.357 05:08:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.357 05:08:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:00.616 { 00:28:00.616 "nbd_device": "/dev/nbd0", 00:28:00.616 "bdev_name": "Nvme0n1p1" 00:28:00.616 }, 00:28:00.616 { 00:28:00.616 "nbd_device": "/dev/nbd1", 00:28:00.616 "bdev_name": "Nvme0n1p2" 00:28:00.616 } 00:28:00.616 ]' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:00.616 { 00:28:00.616 "nbd_device": "/dev/nbd0", 00:28:00.616 "bdev_name": "Nvme0n1p1" 00:28:00.616 }, 00:28:00.616 { 00:28:00.616 "nbd_device": "/dev/nbd1", 00:28:00.616 "bdev_name": "Nvme0n1p2" 00:28:00.616 } 00:28:00.616 ]' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:00.616 /dev/nbd1' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:00.616 /dev/nbd1' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@65 -- # count=2 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@95 -- # count=2 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:00.616 256+0 records in 00:28:00.616 256+0 records out 00:28:00.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00842224 s, 125 MB/s 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:00.616 05:08:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:00.616 256+0 records in 00:28:00.616 256+0 records out 00:28:00.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10933 s, 9.6 MB/s 00:28:00.616 05:08:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:00.616 05:08:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:00.875 256+0 records in 00:28:00.875 256+0 records out 00:28:00.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.084151 s, 12.5 MB/s 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
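The data-verification pattern running here, condensed: 1 MiB of random data is pushed through each NBD export, then read back and compared byte-for-byte (paths shortened from the harness, commands otherwise as logged):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    cmp -b -n 1M nbdrandtest /dev/nbd1
    rm nbdrandtest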
00:28:00.875 05:08:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@51 -- # local i 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:00.875 05:08:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@41 -- # break 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.134 05:08:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@41 -- # break 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.393 05:08:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@65 -- # true 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@65 -- # count=0 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@104 -- # count=0 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:01.652 05:08:24 -- 
bdev/nbd_common.sh@109 -- # return 0 00:28:01.652 05:08:24 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:01.652 05:08:24 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:01.652 malloc_lvol_verify 00:28:01.653 05:08:25 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:01.911 238113fe-a075-40e2-995b-3808c0c45edb 00:28:01.911 05:08:25 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:02.170 7ab6da23-836a-4d4d-98f2-bc2994a482a2 00:28:02.170 05:08:25 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:02.441 /dev/nbd0 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:02.441 mke2fs 1.47.0 (5-Feb-2023) 00:28:02.441 00:28:02.441 Filesystem too small for a journal 00:28:02.441 Discarding device blocks: 0/1024 done 00:28:02.441 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:02.441 00:28:02.441 Allocating group tables: 0/1 done 00:28:02.441 Writing inode tables: 0/1 done 00:28:02.441 Writing superblocks and filesystem accounting information: 0/1 done 00:28:02.441 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@51 -- # local i 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:02.441 05:08:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:02.704 05:08:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@41 -- # break 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@45 -- # return 0 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:02.704 05:08:26 -- bdev/nbd_common.sh@147 -- # return 0 00:28:02.704 05:08:26 -- bdev/blockdev.sh@324 -- # killprocess 92053 00:28:02.704 05:08:26 -- common/autotest_common.sh@936 -- # '[' -z 92053 ']' 00:28:02.704 05:08:26 -- common/autotest_common.sh@940 -- # kill -0 92053 00:28:02.704 05:08:26 -- common/autotest_common.sh@941 -- # uname 00:28:02.704 05:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:02.704 05:08:26 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92053 00:28:02.704 05:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:02.704 05:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:02.704 killing process with pid 92053 00:28:02.704 05:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92053' 00:28:02.704 05:08:26 -- common/autotest_common.sh@955 -- # kill 92053 00:28:02.704 05:08:26 -- common/autotest_common.sh@960 -- # wait 92053 00:28:03.641 05:08:26 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:03.641 00:28:03.641 real 0m6.239s 00:28:03.641 user 0m9.031s 00:28:03.641 sys 0m1.511s 00:28:03.641 05:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:03.641 05:08:26 -- common/autotest_common.sh@10 -- # set +x 00:28:03.641 ************************************ 00:28:03.641 END TEST bdev_nbd 00:28:03.641 ************************************ 00:28:03.641 05:08:27 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:03.641 05:08:27 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:28:03.641 05:08:27 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:28:03.641 skipping fio tests on NVMe due to multi-ns failures. 00:28:03.641 05:08:27 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:03.641 05:08:27 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:03.641 05:08:27 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:03.641 05:08:27 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:03.641 05:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:03.641 05:08:27 -- common/autotest_common.sh@10 -- # set +x 00:28:03.641 ************************************ 00:28:03.641 START TEST bdev_verify 00:28:03.641 ************************************ 00:28:03.641 05:08:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:03.641 [2024-11-18 05:08:27.099667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:03.641 [2024-11-18 05:08:27.099816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92285 ] 00:28:03.900 [2024-11-18 05:08:27.252455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:03.900 [2024-11-18 05:08:27.400330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.900 [2024-11-18 05:08:27.400353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.468 Running I/O for 5 seconds... 
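The verify job above runs the bdevperf invocation recorded at the start of this test; the flag readings below follow the usual bdevperf conventions (queue depth, IO size in bytes, workload, run time in seconds, core mask) rather than anything this log states:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # the big-IO run further below swaps in -o 65536; the write-zeroes
    # run uses -w write_zeroes -t 1 and drops -C/-m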
00:28:09.754 00:28:09.754 Latency(us) 00:28:09.754 [2024-11-18T05:08:33.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.754 [2024-11-18T05:08:33.278Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:09.754 Verification LBA range: start 0x0 length 0x4ff80 00:28:09.754 Nvme0n1p1 : 5.02 7519.60 29.37 0.00 0.00 16974.99 1489.45 24903.68 00:28:09.754 [2024-11-18T05:08:33.278Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:09.754 Verification LBA range: start 0x4ff80 length 0x4ff80 00:28:09.754 Nvme0n1p1 : 5.02 7557.88 29.52 0.00 0.00 16889.43 2636.33 25380.31 00:28:09.754 [2024-11-18T05:08:33.278Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:09.754 Verification LBA range: start 0x0 length 0x4ff7f 00:28:09.754 Nvme0n1p2 : 5.01 7517.07 29.36 0.00 0.00 16972.20 1779.90 25856.93 00:28:09.754 [2024-11-18T05:08:33.278Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:09.754 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:28:09.754 Nvme0n1p2 : 5.02 7565.57 29.55 0.00 0.00 16847.39 938.36 25022.84 00:28:09.754 [2024-11-18T05:08:33.278Z] =================================================================================================================== 00:28:09.754 [2024-11-18T05:08:33.278Z] Total : 30160.13 117.81 0.00 0.00 16920.82 938.36 25856.93 00:28:12.318 00:28:12.318 real 0m8.733s 00:28:12.318 user 0m16.482s 00:28:12.318 sys 0m0.202s 00:28:12.318 05:08:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:12.318 05:08:35 -- common/autotest_common.sh@10 -- # set +x 00:28:12.318 ************************************ 00:28:12.318 END TEST bdev_verify 00:28:12.318 ************************************ 00:28:12.318 05:08:35 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:12.318 05:08:35 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:12.318 05:08:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:12.318 05:08:35 -- common/autotest_common.sh@10 -- # set +x 00:28:12.318 ************************************ 00:28:12.318 START TEST bdev_verify_big_io 00:28:12.318 ************************************ 00:28:12.318 05:08:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:12.577 [2024-11-18 05:08:35.884968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:12.577 [2024-11-18 05:08:35.885102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92379 ] 00:28:12.577 [2024-11-18 05:08:36.037089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:12.836 [2024-11-18 05:08:36.192062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.836 [2024-11-18 05:08:36.192075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.095 Running I/O for 5 seconds... 
00:28:18.368 00:28:18.368 Latency(us) 00:28:18.368 [2024-11-18T05:08:41.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.368 [2024-11-18T05:08:41.892Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:18.368 Verification LBA range: start 0x0 length 0x4ff8 00:28:18.368 Nvme0n1p1 : 5.10 803.65 50.23 0.00 0.00 157569.04 2234.18 222107.46 00:28:18.368 [2024-11-18T05:08:41.892Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:18.368 Verification LBA range: start 0x4ff8 length 0x4ff8 00:28:18.368 Nvme0n1p1 : 5.09 938.38 58.65 0.00 0.00 134907.54 2055.45 197322.94 00:28:18.368 [2024-11-18T05:08:41.892Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:18.368 Verification LBA range: start 0x0 length 0x4ff7 00:28:18.368 Nvme0n1p2 : 5.10 803.40 50.21 0.00 0.00 155279.78 2472.49 168725.41 00:28:18.368 [2024-11-18T05:08:41.892Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:18.368 Verification LBA range: start 0x4ff7 length 0x4ff7 00:28:18.368 Nvme0n1p2 : 5.10 946.31 59.14 0.00 0.00 132432.08 1333.06 157286.40 00:28:18.368 [2024-11-18T05:08:41.892Z] =================================================================================================================== 00:28:18.368 [2024-11-18T05:08:41.892Z] Total : 3491.74 218.23 0.00 0.00 144149.21 1333.06 222107.46 00:28:19.746 00:28:19.747 real 0m7.125s 00:28:19.747 user 0m13.240s 00:28:19.747 sys 0m0.202s 00:28:19.747 05:08:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:19.747 05:08:42 -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 ************************************ 00:28:19.747 END TEST bdev_verify_big_io 00:28:19.747 ************************************ 00:28:19.747 05:08:43 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:19.747 05:08:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:19.747 05:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:19.747 05:08:43 -- common/autotest_common.sh@10 -- # set +x 00:28:19.747 ************************************ 00:28:19.747 START TEST bdev_write_zeroes 00:28:19.747 ************************************ 00:28:19.747 05:08:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:19.747 [2024-11-18 05:08:43.053151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:19.747 [2024-11-18 05:08:43.053289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92478 ] 00:28:19.747 [2024-11-18 05:08:43.204472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.006 [2024-11-18 05:08:43.352037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.264 Running I/O for 1 seconds... 
00:28:21.198 00:28:21.198 Latency(us) 00:28:21.198 [2024-11-18T05:08:44.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.198 [2024-11-18T05:08:44.722Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:21.198 Nvme0n1p1 : 1.01 21874.54 85.45 0.00 0.00 5837.98 3500.22 12630.57 00:28:21.198 [2024-11-18T05:08:44.722Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:21.198 Nvme0n1p2 : 1.01 21837.83 85.30 0.00 0.00 5838.63 2546.97 12451.84 00:28:21.198 [2024-11-18T05:08:44.722Z] =================================================================================================================== 00:28:21.198 [2024-11-18T05:08:44.722Z] Total : 43712.36 170.75 0.00 0.00 5838.30 2546.97 12630.57 00:28:22.136 00:28:22.136 real 0m2.624s 00:28:22.136 user 0m2.339s 00:28:22.136 sys 0m0.185s 00:28:22.136 05:08:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:22.136 ************************************ 00:28:22.136 END TEST bdev_write_zeroes 00:28:22.136 ************************************ 00:28:22.136 05:08:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.395 05:08:45 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:22.395 05:08:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:22.395 05:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.395 05:08:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.395 ************************************ 00:28:22.395 START TEST bdev_json_nonenclosed 00:28:22.395 ************************************ 00:28:22.395 05:08:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:22.395 [2024-11-18 05:08:45.746628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:22.395 [2024-11-18 05:08:45.746798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92520 ] 00:28:22.654 [2024-11-18 05:08:45.916999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.654 [2024-11-18 05:08:46.071788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.654 [2024-11-18 05:08:46.071961] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
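The "not enclosed in {}" error above is the point of this test: spdk_app_start must reject a config whose top level is not a JSON object and exit non-zero. Illustrative shapes only, inferred from the error text rather than the literal nonenclosed.json fixture:

    "subsystems": []        # rejected: top level not enclosed in {}
    { "subsystems": [] }    # accepted shape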
00:28:22.654 [2024-11-18 05:08:46.071991] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:22.913 00:28:22.913 real 0m0.721s 00:28:22.913 user 0m0.498s 00:28:22.913 sys 0m0.122s 00:28:22.913 05:08:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:22.913 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:22.913 ************************************ 00:28:22.913 END TEST bdev_json_nonenclosed 00:28:22.913 ************************************ 00:28:23.172 05:08:46 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:23.172 05:08:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:23.172 05:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:23.172 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:23.172 ************************************ 00:28:23.172 START TEST bdev_json_nonarray 00:28:23.172 ************************************ 00:28:23.172 05:08:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:23.172 [2024-11-18 05:08:46.518784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:23.172 [2024-11-18 05:08:46.518971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92550 ] 00:28:23.172 [2024-11-18 05:08:46.688210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.431 [2024-11-18 05:08:46.838097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.431 [2024-11-18 05:08:46.838312] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
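Likewise for the nonarray error just above: the config parses as an object, but "subsystems" must be an array. Again inferred shapes, not the fixture itself:

    { "subsystems": {} }    # rejected: 'subsystems' should be an array
    { "subsystems": [] }    # accepted shape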
00:28:23.431 [2024-11-18 05:08:46.838343] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:23.690 00:28:23.690 real 0m0.717s 00:28:23.690 user 0m0.497s 00:28:23.690 sys 0m0.120s 00:28:23.690 05:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:23.690 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:23.690 ************************************ 00:28:23.690 END TEST bdev_json_nonarray 00:28:23.690 ************************************ 00:28:23.949 05:08:47 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:28:23.949 05:08:47 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:28:23.949 05:08:47 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:28:23.949 05:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:23.949 05:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:23.949 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:23.949 ************************************ 00:28:23.949 START TEST bdev_gpt_uuid 00:28:23.949 ************************************ 00:28:23.949 05:08:47 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:28:23.949 05:08:47 -- bdev/blockdev.sh@612 -- # local bdev 00:28:23.949 05:08:47 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:28:23.949 05:08:47 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=92577 00:28:23.949 05:08:47 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:23.949 05:08:47 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:23.949 05:08:47 -- bdev/blockdev.sh@47 -- # waitforlisten 92577 00:28:23.949 05:08:47 -- common/autotest_common.sh@829 -- # '[' -z 92577 ']' 00:28:23.949 05:08:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.949 05:08:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.949 05:08:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.949 05:08:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.949 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:23.949 [2024-11-18 05:08:47.288832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:23.949 [2024-11-18 05:08:47.288990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92577 ] 00:28:23.950 [2024-11-18 05:08:47.444474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.208 [2024-11-18 05:08:47.595487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:24.208 [2024-11-18 05:08:47.595764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.775 05:08:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.775 05:08:48 -- common/autotest_common.sh@862 -- # return 0 00:28:24.775 05:08:48 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:24.775 05:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.775 05:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:25.035 Some configs were skipped because the RPC state that can call them passed over. 
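The GPT-UUID check that follows, condensed to its RPC calls. Treating rpc_cmd as the harness wrapper around scripts/rpc.py is an assumption about the helper; the RPC names, UUIDs, and jq filters are verbatim from this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
    # the test asserts that both the alias and the unique_partition_guid
    # echo the queried UUID back, for SPDK_TEST_first and SPDK_TEST_second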
00:28:25.035 05:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:28:25.035 05:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.035 05:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:25.035 05:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:28:25.035 05:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.035 05:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:25.035 05:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@619 -- # bdev='[ 00:28:25.035 { 00:28:25.035 "name": "Nvme0n1p1", 00:28:25.035 "aliases": [ 00:28:25.035 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:28:25.035 ], 00:28:25.035 "product_name": "GPT Disk", 00:28:25.035 "block_size": 4096, 00:28:25.035 "num_blocks": 655104, 00:28:25.035 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:25.035 "assigned_rate_limits": { 00:28:25.035 "rw_ios_per_sec": 0, 00:28:25.035 "rw_mbytes_per_sec": 0, 00:28:25.035 "r_mbytes_per_sec": 0, 00:28:25.035 "w_mbytes_per_sec": 0 00:28:25.035 }, 00:28:25.035 "claimed": false, 00:28:25.035 "zoned": false, 00:28:25.035 "supported_io_types": { 00:28:25.035 "read": true, 00:28:25.035 "write": true, 00:28:25.035 "unmap": true, 00:28:25.035 "write_zeroes": true, 00:28:25.035 "flush": true, 00:28:25.035 "reset": true, 00:28:25.035 "compare": true, 00:28:25.035 "compare_and_write": false, 00:28:25.035 "abort": true, 00:28:25.035 "nvme_admin": false, 00:28:25.035 "nvme_io": false 00:28:25.035 }, 00:28:25.035 "driver_specific": { 00:28:25.035 "gpt": { 00:28:25.035 "base_bdev": "Nvme0n1", 00:28:25.035 "offset_blocks": 256, 00:28:25.035 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:28:25.035 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:25.035 "partition_name": "SPDK_TEST_first" 00:28:25.035 } 00:28:25.035 } 00:28:25.035 } 00:28:25.035 ]' 00:28:25.035 05:08:48 -- bdev/blockdev.sh@620 -- # jq -r length 00:28:25.035 05:08:48 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:28:25.035 05:08:48 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:25.035 05:08:48 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:25.035 05:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.035 05:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:25.035 05:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@624 -- # bdev='[ 00:28:25.035 { 00:28:25.035 "name": "Nvme0n1p2", 00:28:25.035 "aliases": [ 00:28:25.035 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:28:25.035 ], 00:28:25.035 "product_name": "GPT Disk", 00:28:25.035 "block_size": 4096, 00:28:25.035 "num_blocks": 655103, 00:28:25.035 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:25.035 "assigned_rate_limits": { 00:28:25.035 "rw_ios_per_sec": 0, 00:28:25.035 
"rw_mbytes_per_sec": 0, 00:28:25.035 "r_mbytes_per_sec": 0, 00:28:25.035 "w_mbytes_per_sec": 0 00:28:25.035 }, 00:28:25.035 "claimed": false, 00:28:25.035 "zoned": false, 00:28:25.035 "supported_io_types": { 00:28:25.035 "read": true, 00:28:25.035 "write": true, 00:28:25.035 "unmap": true, 00:28:25.035 "write_zeroes": true, 00:28:25.035 "flush": true, 00:28:25.035 "reset": true, 00:28:25.035 "compare": true, 00:28:25.035 "compare_and_write": false, 00:28:25.035 "abort": true, 00:28:25.035 "nvme_admin": false, 00:28:25.035 "nvme_io": false 00:28:25.035 }, 00:28:25.035 "driver_specific": { 00:28:25.035 "gpt": { 00:28:25.035 "base_bdev": "Nvme0n1", 00:28:25.035 "offset_blocks": 655360, 00:28:25.035 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:28:25.035 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:25.035 "partition_name": "SPDK_TEST_second" 00:28:25.035 } 00:28:25.035 } 00:28:25.035 } 00:28:25.035 ]' 00:28:25.035 05:08:48 -- bdev/blockdev.sh@625 -- # jq -r length 00:28:25.035 05:08:48 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:28:25.035 05:08:48 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:25.035 05:08:48 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:25.035 05:08:48 -- bdev/blockdev.sh@629 -- # killprocess 92577 00:28:25.035 05:08:48 -- common/autotest_common.sh@936 -- # '[' -z 92577 ']' 00:28:25.035 05:08:48 -- common/autotest_common.sh@940 -- # kill -0 92577 00:28:25.036 05:08:48 -- common/autotest_common.sh@941 -- # uname 00:28:25.036 05:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:25.036 05:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92577 00:28:25.036 05:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:25.036 05:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:25.036 killing process with pid 92577 00:28:25.036 05:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92577' 00:28:25.036 05:08:48 -- common/autotest_common.sh@955 -- # kill 92577 00:28:25.036 05:08:48 -- common/autotest_common.sh@960 -- # wait 92577 00:28:26.942 00:28:26.942 real 0m2.944s 00:28:26.942 user 0m3.024s 00:28:26.942 sys 0m0.416s 00:28:26.942 05:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:26.942 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:28:26.942 ************************************ 00:28:26.942 END TEST bdev_gpt_uuid 00:28:26.942 ************************************ 00:28:26.942 05:08:50 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:28:26.942 05:08:50 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:26.942 05:08:50 -- bdev/blockdev.sh@809 -- # cleanup 00:28:26.942 05:08:50 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:26.942 05:08:50 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:26.942 05:08:50 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:28:26.942 05:08:50 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:28:26.942 05:08:50 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:28:26.942 05:08:50 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:27.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:28:27.201 Waiting for block devices as requested 00:28:27.201 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:27.201 05:08:50 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:28:27.201 05:08:50 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:28:27.460 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:28:27.460 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:28:27.460 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:27.460 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:27.460 05:08:50 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:28:27.460 00:28:27.460 real 0m41.003s 00:28:27.461 user 0m59.405s 00:28:27.461 sys 0m5.641s 00:28:27.461 05:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:27.461 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.461 ************************************ 00:28:27.461 END TEST blockdev_nvme_gpt 00:28:27.461 ************************************ 00:28:27.720 05:08:50 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:27.720 05:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:27.720 05:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.720 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 ************************************ 00:28:27.720 START TEST nvme 00:28:27.720 ************************************ 00:28:27.720 05:08:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:27.720 * Looking for test storage... 00:28:27.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:27.720 05:08:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:27.720 05:08:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:27.720 05:08:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:27.720 05:08:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:27.720 05:08:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:27.720 05:08:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:27.720 05:08:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:27.720 05:08:51 -- scripts/common.sh@335 -- # IFS=.-: 00:28:27.720 05:08:51 -- scripts/common.sh@335 -- # read -ra ver1 00:28:27.720 05:08:51 -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.720 05:08:51 -- scripts/common.sh@336 -- # read -ra ver2 00:28:27.720 05:08:51 -- scripts/common.sh@337 -- # local 'op=<' 00:28:27.720 05:08:51 -- scripts/common.sh@339 -- # ver1_l=2 00:28:27.720 05:08:51 -- scripts/common.sh@340 -- # ver2_l=1 00:28:27.720 05:08:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:27.720 05:08:51 -- scripts/common.sh@343 -- # case "$op" in 00:28:27.720 05:08:51 -- scripts/common.sh@344 -- # : 1 00:28:27.720 05:08:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:27.720 05:08:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.720 05:08:51 -- scripts/common.sh@364 -- # decimal 1 00:28:27.720 05:08:51 -- scripts/common.sh@352 -- # local d=1 00:28:27.720 05:08:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.720 05:08:51 -- scripts/common.sh@354 -- # echo 1 00:28:27.720 05:08:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:27.720 05:08:51 -- scripts/common.sh@365 -- # decimal 2 00:28:27.720 05:08:51 -- scripts/common.sh@352 -- # local d=2 00:28:27.720 05:08:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.720 05:08:51 -- scripts/common.sh@354 -- # echo 2 00:28:27.720 05:08:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:27.720 05:08:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:27.720 05:08:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:27.720 05:08:51 -- scripts/common.sh@367 -- # return 0 00:28:27.720 05:08:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.720 05:08:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.720 --rc genhtml_branch_coverage=1 00:28:27.720 --rc genhtml_function_coverage=1 00:28:27.720 --rc genhtml_legend=1 00:28:27.720 --rc geninfo_all_blocks=1 00:28:27.720 --rc geninfo_unexecuted_blocks=1 00:28:27.720 00:28:27.720 ' 00:28:27.720 05:08:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.720 --rc genhtml_branch_coverage=1 00:28:27.720 --rc genhtml_function_coverage=1 00:28:27.720 --rc genhtml_legend=1 00:28:27.720 --rc geninfo_all_blocks=1 00:28:27.720 --rc geninfo_unexecuted_blocks=1 00:28:27.720 00:28:27.720 ' 00:28:27.720 05:08:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.720 --rc genhtml_branch_coverage=1 00:28:27.720 --rc genhtml_function_coverage=1 00:28:27.720 --rc genhtml_legend=1 00:28:27.720 --rc geninfo_all_blocks=1 00:28:27.720 --rc geninfo_unexecuted_blocks=1 00:28:27.720 00:28:27.720 ' 00:28:27.720 05:08:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.720 --rc genhtml_branch_coverage=1 00:28:27.720 --rc genhtml_function_coverage=1 00:28:27.720 --rc genhtml_legend=1 00:28:27.720 --rc geninfo_all_blocks=1 00:28:27.720 --rc geninfo_unexecuted_blocks=1 00:28:27.720 00:28:27.720 ' 00:28:27.720 05:08:51 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:28.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:28:28.289 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:28.857 05:08:52 -- nvme/nvme.sh@79 -- # uname 00:28:28.857 05:08:52 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:28:28.857 05:08:52 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:28:28.857 05:08:52 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:28:28.857 05:08:52 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:28:28.857 05:08:52 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:28:28.857 05:08:52 -- common/autotest_common.sh@1055 -- # echo 0 00:28:28.857 05:08:52 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:28:28.857 05:08:52 -- common/autotest_common.sh@1057 -- # 
stubpid=92952 00:28:28.857 Waiting for stub to ready for secondary processes... 00:28:28.857 05:08:52 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:28:28.857 05:08:52 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:28.857 05:08:52 -- common/autotest_common.sh@1061 -- # [[ -e /proc/92952 ]] 00:28:28.857 05:08:52 -- common/autotest_common.sh@1062 -- # sleep 1s 00:28:28.857 [2024-11-18 05:08:52.298683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:28.857 [2024-11-18 05:08:52.298822] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.793 [2024-11-18 05:08:53.025587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:29.793 [2024-11-18 05:08:53.228883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.793 [2024-11-18 05:08:53.229003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.793 [2024-11-18 05:08:53.229027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.793 [2024-11-18 05:08:53.243779] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:28:29.793 [2024-11-18 05:08:53.253765] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:28:29.793 [2024-11-18 05:08:53.254000] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:28:29.793 05:08:53 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:29.793 done. 00:28:29.793 05:08:53 -- common/autotest_common.sh@1064 -- # echo done. 
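For reference: the xtrace above is autotest_common.sh's start_stub helper, which launches the stub app as the DPDK primary process so the nvme tests below can attach as secondaries via -i 0. A minimal sketch reconstructed from this trace, not an authoritative copy of the helper; the binary path, the /var/run/spdk_stub0 ready-file, and the stub arguments are the values from this run:

start_stub() {
    # Launch the stub as the DPDK primary (shm id 0, 4096 MB of
    # hugepage memory, core mask 0xE), matching the trace above.
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    # The stub creates /var/run/spdk_stub0 once primary init completes;
    # bail out early if the stub process dies before that happens.
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || return 1
        sleep 1s
    done
    echo done.
}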
00:28:29.793 05:08:53 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:29.793 05:08:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:28:29.793 05:08:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:29.793 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:29.793 ************************************ 00:28:29.793 START TEST nvme_reset 00:28:29.793 ************************************ 00:28:29.793 05:08:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:30.052 Initializing NVMe Controllers 00:28:30.052 Skipping QEMU NVMe SSD at 0000:00:06.0 00:28:30.052 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:28:30.052 00:28:30.052 real 0m0.291s 00:28:30.052 user 0m0.098s 00:28:30.052 sys 0m0.157s 00:28:30.312 05:08:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:30.312 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:30.312 ************************************ 00:28:30.312 END TEST nvme_reset 00:28:30.312 ************************************ 00:28:30.312 05:08:53 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:28:30.312 05:08:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:30.312 05:08:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:30.312 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:30.312 ************************************ 00:28:30.312 START TEST nvme_identify 00:28:30.312 ************************************ 00:28:30.312 05:08:53 -- common/autotest_common.sh@1114 -- # nvme_identify 00:28:30.312 05:08:53 -- nvme/nvme.sh@12 -- # bdfs=() 00:28:30.312 05:08:53 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:28:30.312 05:08:53 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:28:30.312 05:08:53 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:28:30.312 05:08:53 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:30.312 05:08:53 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:30.312 05:08:53 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:30.312 05:08:53 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:30.312 05:08:53 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:30.312 05:08:53 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:30.312 05:08:53 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:28:30.312 05:08:53 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:28:30.571 [2024-11-18 05:08:53.932036] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 92975 terminated unexpected 00:28:30.571 ===================================================== 00:28:30.571 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:30.571 ===================================================== 00:28:30.571 Controller Capabilities/Features 00:28:30.571 ================================ 00:28:30.571 Vendor ID: 1b36 00:28:30.571 Subsystem Vendor ID: 1af4 00:28:30.571 Serial Number: 12340 00:28:30.571 Model Number: QEMU NVMe Ctrl 00:28:30.571 Firmware Version: 8.0.0 00:28:30.571 Recommended Arb Burst: 6 00:28:30.571 IEEE OUI Identifier: 00 54 52 00:28:30.571 Multi-path I/O 00:28:30.571 May have multiple subsystem ports: No 00:28:30.571 May have multiple controllers: No 00:28:30.571 
Associated with SR-IOV VF: No 00:28:30.571 Max Data Transfer Size: 524288 00:28:30.571 Max Number of Namespaces: 256 00:28:30.571 Max Number of I/O Queues: 64 00:28:30.571 NVMe Specification Version (VS): 1.4 00:28:30.571 NVMe Specification Version (Identify): 1.4 00:28:30.571 Maximum Queue Entries: 2048 00:28:30.571 Contiguous Queues Required: Yes 00:28:30.571 Arbitration Mechanisms Supported 00:28:30.571 Weighted Round Robin: Not Supported 00:28:30.571 Vendor Specific: Not Supported 00:28:30.571 Reset Timeout: 7500 ms 00:28:30.571 Doorbell Stride: 4 bytes 00:28:30.571 NVM Subsystem Reset: Not Supported 00:28:30.571 Command Sets Supported 00:28:30.571 NVM Command Set: Supported 00:28:30.571 Boot Partition: Not Supported 00:28:30.571 Memory Page Size Minimum: 4096 bytes 00:28:30.571 Memory Page Size Maximum: 65536 bytes 00:28:30.571 Persistent Memory Region: Not Supported 00:28:30.571 Optional Asynchronous Events Supported 00:28:30.571 Namespace Attribute Notices: Supported 00:28:30.571 Firmware Activation Notices: Not Supported 00:28:30.571 ANA Change Notices: Not Supported 00:28:30.571 PLE Aggregate Log Change Notices: Not Supported 00:28:30.571 LBA Status Info Alert Notices: Not Supported 00:28:30.571 EGE Aggregate Log Change Notices: Not Supported 00:28:30.571 Normal NVM Subsystem Shutdown event: Not Supported 00:28:30.571 Zone Descriptor Change Notices: Not Supported 00:28:30.571 Discovery Log Change Notices: Not Supported 00:28:30.571 Controller Attributes 00:28:30.571 128-bit Host Identifier: Not Supported 00:28:30.571 Non-Operational Permissive Mode: Not Supported 00:28:30.571 NVM Sets: Not Supported 00:28:30.571 Read Recovery Levels: Not Supported 00:28:30.571 Endurance Groups: Not Supported 00:28:30.571 Predictable Latency Mode: Not Supported 00:28:30.571 Traffic Based Keep ALive: Not Supported 00:28:30.571 Namespace Granularity: Not Supported 00:28:30.571 SQ Associations: Not Supported 00:28:30.571 UUID List: Not Supported 00:28:30.571 Multi-Domain Subsystem: Not Supported 00:28:30.571 Fixed Capacity Management: Not Supported 00:28:30.571 Variable Capacity Management: Not Supported 00:28:30.571 Delete Endurance Group: Not Supported 00:28:30.571 Delete NVM Set: Not Supported 00:28:30.571 Extended LBA Formats Supported: Supported 00:28:30.571 Flexible Data Placement Supported: Not Supported 00:28:30.571 00:28:30.571 Controller Memory Buffer Support 00:28:30.571 ================================ 00:28:30.571 Supported: No 00:28:30.571 00:28:30.571 Persistent Memory Region Support 00:28:30.571 ================================ 00:28:30.571 Supported: No 00:28:30.571 00:28:30.571 Admin Command Set Attributes 00:28:30.571 ============================ 00:28:30.571 Security Send/Receive: Not Supported 00:28:30.571 Format NVM: Supported 00:28:30.571 Firmware Activate/Download: Not Supported 00:28:30.571 Namespace Management: Supported 00:28:30.571 Device Self-Test: Not Supported 00:28:30.572 Directives: Supported 00:28:30.572 NVMe-MI: Not Supported 00:28:30.572 Virtualization Management: Not Supported 00:28:30.572 Doorbell Buffer Config: Supported 00:28:30.572 Get LBA Status Capability: Not Supported 00:28:30.572 Command & Feature Lockdown Capability: Not Supported 00:28:30.572 Abort Command Limit: 4 00:28:30.572 Async Event Request Limit: 4 00:28:30.572 Number of Firmware Slots: N/A 00:28:30.572 Firmware Slot 1 Read-Only: N/A 00:28:30.572 Firmware Activation Without Reset: N/A 00:28:30.572 Multiple Update Detection Support: N/A 00:28:30.572 Firmware Update Granularity: No Information 
Provided 00:28:30.572 Per-Namespace SMART Log: Yes 00:28:30.572 Asymmetric Namespace Access Log Page: Not Supported 00:28:30.572 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:30.572 Command Effects Log Page: Supported 00:28:30.572 Get Log Page Extended Data: Supported 00:28:30.572 Telemetry Log Pages: Not Supported 00:28:30.572 Persistent Event Log Pages: Not Supported 00:28:30.572 Supported Log Pages Log Page: May Support 00:28:30.572 Commands Supported & Effects Log Page: Not Supported 00:28:30.572 Feature Identifiers & Effects Log Page:May Support 00:28:30.572 NVMe-MI Commands & Effects Log Page: May Support 00:28:30.572 Data Area 4 for Telemetry Log: Not Supported 00:28:30.572 Error Log Page Entries Supported: 1 00:28:30.572 Keep Alive: Not Supported 00:28:30.572 00:28:30.572 NVM Command Set Attributes 00:28:30.572 ========================== 00:28:30.572 Submission Queue Entry Size 00:28:30.572 Max: 64 00:28:30.572 Min: 64 00:28:30.572 Completion Queue Entry Size 00:28:30.572 Max: 16 00:28:30.572 Min: 16 00:28:30.572 Number of Namespaces: 256 00:28:30.572 Compare Command: Supported 00:28:30.572 Write Uncorrectable Command: Not Supported 00:28:30.572 Dataset Management Command: Supported 00:28:30.572 Write Zeroes Command: Supported 00:28:30.572 Set Features Save Field: Supported 00:28:30.572 Reservations: Not Supported 00:28:30.572 Timestamp: Supported 00:28:30.572 Copy: Supported 00:28:30.572 Volatile Write Cache: Present 00:28:30.572 Atomic Write Unit (Normal): 1 00:28:30.572 Atomic Write Unit (PFail): 1 00:28:30.572 Atomic Compare & Write Unit: 1 00:28:30.572 Fused Compare & Write: Not Supported 00:28:30.572 Scatter-Gather List 00:28:30.572 SGL Command Set: Supported 00:28:30.572 SGL Keyed: Not Supported 00:28:30.572 SGL Bit Bucket Descriptor: Not Supported 00:28:30.572 SGL Metadata Pointer: Not Supported 00:28:30.572 Oversized SGL: Not Supported 00:28:30.572 SGL Metadata Address: Not Supported 00:28:30.572 SGL Offset: Not Supported 00:28:30.572 Transport SGL Data Block: Not Supported 00:28:30.572 Replay Protected Memory Block: Not Supported 00:28:30.572 00:28:30.572 Firmware Slot Information 00:28:30.572 ========================= 00:28:30.572 Active slot: 1 00:28:30.572 Slot 1 Firmware Revision: 1.0 00:28:30.572 00:28:30.572 00:28:30.572 Commands Supported and Effects 00:28:30.572 ============================== 00:28:30.572 Admin Commands 00:28:30.572 -------------- 00:28:30.572 Delete I/O Submission Queue (00h): Supported 00:28:30.572 Create I/O Submission Queue (01h): Supported 00:28:30.572 Get Log Page (02h): Supported 00:28:30.572 Delete I/O Completion Queue (04h): Supported 00:28:30.572 Create I/O Completion Queue (05h): Supported 00:28:30.572 Identify (06h): Supported 00:28:30.572 Abort (08h): Supported 00:28:30.572 Set Features (09h): Supported 00:28:30.572 Get Features (0Ah): Supported 00:28:30.572 Asynchronous Event Request (0Ch): Supported 00:28:30.572 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:30.572 Directive Send (19h): Supported 00:28:30.572 Directive Receive (1Ah): Supported 00:28:30.572 Virtualization Management (1Ch): Supported 00:28:30.572 Doorbell Buffer Config (7Ch): Supported 00:28:30.572 Format NVM (80h): Supported LBA-Change 00:28:30.572 I/O Commands 00:28:30.572 ------------ 00:28:30.572 Flush (00h): Supported LBA-Change 00:28:30.572 Write (01h): Supported LBA-Change 00:28:30.572 Read (02h): Supported 00:28:30.572 Compare (05h): Supported 00:28:30.572 Write Zeroes (08h): Supported LBA-Change 00:28:30.572 Dataset Management (09h): 
Supported LBA-Change 00:28:30.572 Unknown (0Ch): Supported 00:28:30.572 Unknown (12h): Supported 00:28:30.572 Copy (19h): Supported LBA-Change 00:28:30.572 Unknown (1Dh): Supported LBA-Change 00:28:30.572 00:28:30.572 Error Log 00:28:30.572 ========= 00:28:30.572 00:28:30.572 Arbitration 00:28:30.572 =========== 00:28:30.572 Arbitration Burst: no limit 00:28:30.572 00:28:30.572 Power Management 00:28:30.572 ================ 00:28:30.572 Number of Power States: 1 00:28:30.572 Current Power State: Power State #0 00:28:30.572 Power State #0: 00:28:30.572 Max Power: 25.00 W 00:28:30.572 Non-Operational State: Operational 00:28:30.572 Entry Latency: 16 microseconds 00:28:30.572 Exit Latency: 4 microseconds 00:28:30.572 Relative Read Throughput: 0 00:28:30.572 Relative Read Latency: 0 00:28:30.572 Relative Write Throughput: 0 00:28:30.572 Relative Write Latency: 0 00:28:30.572 Idle Power: Not Reported 00:28:30.572 Active Power: Not Reported 00:28:30.572 Non-Operational Permissive Mode: Not Supported 00:28:30.572 00:28:30.572 Health Information 00:28:30.572 ================== 00:28:30.572 Critical Warnings: 00:28:30.572 Available Spare Space: OK 00:28:30.572 Temperature: OK 00:28:30.572 Device Reliability: OK 00:28:30.572 Read Only: No 00:28:30.572 Volatile Memory Backup: OK 00:28:30.572 Current Temperature: 323 Kelvin (50 Celsius) 00:28:30.572 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:30.572 Available Spare: 0% 00:28:30.572 Available Spare Threshold: 0% 00:28:30.572 Life Percentage Used: 0% 00:28:30.572 Data Units Read: 7671 00:28:30.572 Data Units Written: 3720 00:28:30.572 Host Read Commands: 367023 00:28:30.572 Host Write Commands: 198479 00:28:30.572 Controller Busy Time: 0 minutes 00:28:30.572 Power Cycles: 0 00:28:30.572 Power On Hours: 0 hours 00:28:30.572 Unsafe Shutdowns: 0 00:28:30.572 Unrecoverable Media Errors: 0 00:28:30.572 Lifetime Error Log Entries: 0 00:28:30.572 Warning Temperature Time: 0 minutes 00:28:30.572 Critical Temperature Time: 0 minutes 00:28:30.572 00:28:30.572 Number of Queues 00:28:30.572 ================ 00:28:30.572 Number of I/O Submission Queues: 64 00:28:30.572 Number of I/O Completion Queues: 64 00:28:30.572 00:28:30.572 ZNS Specific Controller Data 00:28:30.572 ============================ 00:28:30.572 Zone Append Size Limit: 0 00:28:30.572 00:28:30.572 00:28:30.572 Active Namespaces 00:28:30.572 ================= 00:28:30.572 Namespace ID:1 00:28:30.572 Error Recovery Timeout: Unlimited 00:28:30.572 Command Set Identifier: NVM (00h) 00:28:30.572 Deallocate: Supported 00:28:30.572 Deallocated/Unwritten Error: Supported 00:28:30.572 Deallocated Read Value: All 0x00 00:28:30.572 Deallocate in Write Zeroes: Not Supported 00:28:30.572 Deallocated Guard Field: 0xFFFF 00:28:30.572 Flush: Supported 00:28:30.572 Reservation: Not Supported 00:28:30.572 Namespace Sharing Capabilities: Private 00:28:30.572 Size (in LBAs): 1310720 (5GiB) 00:28:30.572 Capacity (in LBAs): 1310720 (5GiB) 00:28:30.572 Utilization (in LBAs): 1310720 (5GiB) 00:28:30.572 Thin Provisioning: Not Supported 00:28:30.572 Per-NS Atomic Units: No 00:28:30.572 Maximum Single Source Range Length: 128 00:28:30.572 Maximum Copy Length: 128 00:28:30.572 Maximum Source Range Count: 128 00:28:30.572 NGUID/EUI64 Never Reused: No 00:28:30.572 Namespace Write Protected: No 00:28:30.572 Number of LBA Formats: 8 00:28:30.572 Current LBA Format: LBA Format #04 00:28:30.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:30.572 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:30.572 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:28:30.572 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:30.572 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:30.572 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:30.572 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:30.572 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:30.572 00:28:30.572 05:08:53 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:28:30.572 05:08:53 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:28:30.832 ===================================================== 00:28:30.832 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:30.832 ===================================================== 00:28:30.832 Controller Capabilities/Features 00:28:30.832 ================================ 00:28:30.832 Vendor ID: 1b36 00:28:30.832 Subsystem Vendor ID: 1af4 00:28:30.832 Serial Number: 12340 00:28:30.832 Model Number: QEMU NVMe Ctrl 00:28:30.832 Firmware Version: 8.0.0 00:28:30.832 Recommended Arb Burst: 6 00:28:30.832 IEEE OUI Identifier: 00 54 52 00:28:30.832 Multi-path I/O 00:28:30.832 May have multiple subsystem ports: No 00:28:30.832 May have multiple controllers: No 00:28:30.832 Associated with SR-IOV VF: No 00:28:30.832 Max Data Transfer Size: 524288 00:28:30.832 Max Number of Namespaces: 256 00:28:30.832 Max Number of I/O Queues: 64 00:28:30.832 NVMe Specification Version (VS): 1.4 00:28:30.832 NVMe Specification Version (Identify): 1.4 00:28:30.832 Maximum Queue Entries: 2048 00:28:30.832 Contiguous Queues Required: Yes 00:28:30.832 Arbitration Mechanisms Supported 00:28:30.832 Weighted Round Robin: Not Supported 00:28:30.832 Vendor Specific: Not Supported 00:28:30.832 Reset Timeout: 7500 ms 00:28:30.832 Doorbell Stride: 4 bytes 00:28:30.832 NVM Subsystem Reset: Not Supported 00:28:30.832 Command Sets Supported 00:28:30.832 NVM Command Set: Supported 00:28:30.832 Boot Partition: Not Supported 00:28:30.832 Memory Page Size Minimum: 4096 bytes 00:28:30.832 Memory Page Size Maximum: 65536 bytes 00:28:30.832 Persistent Memory Region: Not Supported 00:28:30.832 Optional Asynchronous Events Supported 00:28:30.832 Namespace Attribute Notices: Supported 00:28:30.832 Firmware Activation Notices: Not Supported 00:28:30.832 ANA Change Notices: Not Supported 00:28:30.832 PLE Aggregate Log Change Notices: Not Supported 00:28:30.832 LBA Status Info Alert Notices: Not Supported 00:28:30.832 EGE Aggregate Log Change Notices: Not Supported 00:28:30.832 Normal NVM Subsystem Shutdown event: Not Supported 00:28:30.832 Zone Descriptor Change Notices: Not Supported 00:28:30.832 Discovery Log Change Notices: Not Supported 00:28:30.832 Controller Attributes 00:28:30.832 128-bit Host Identifier: Not Supported 00:28:30.832 Non-Operational Permissive Mode: Not Supported 00:28:30.832 NVM Sets: Not Supported 00:28:30.832 Read Recovery Levels: Not Supported 00:28:30.832 Endurance Groups: Not Supported 00:28:30.832 Predictable Latency Mode: Not Supported 00:28:30.832 Traffic Based Keep ALive: Not Supported 00:28:30.832 Namespace Granularity: Not Supported 00:28:30.832 SQ Associations: Not Supported 00:28:30.832 UUID List: Not Supported 00:28:30.832 Multi-Domain Subsystem: Not Supported 00:28:30.832 Fixed Capacity Management: Not Supported 00:28:30.832 Variable Capacity Management: Not Supported 00:28:30.832 Delete Endurance Group: Not Supported 00:28:30.832 Delete NVM Set: Not Supported 00:28:30.832 Extended LBA Formats Supported: Supported 
00:28:30.832 Flexible Data Placement Supported: Not Supported 00:28:30.832 00:28:30.832 Controller Memory Buffer Support 00:28:30.832 ================================ 00:28:30.832 Supported: No 00:28:30.832 00:28:30.832 Persistent Memory Region Support 00:28:30.832 ================================ 00:28:30.832 Supported: No 00:28:30.832 00:28:30.832 Admin Command Set Attributes 00:28:30.832 ============================ 00:28:30.832 Security Send/Receive: Not Supported 00:28:30.832 Format NVM: Supported 00:28:30.832 Firmware Activate/Download: Not Supported 00:28:30.832 Namespace Management: Supported 00:28:30.832 Device Self-Test: Not Supported 00:28:30.832 Directives: Supported 00:28:30.832 NVMe-MI: Not Supported 00:28:30.832 Virtualization Management: Not Supported 00:28:30.832 Doorbell Buffer Config: Supported 00:28:30.832 Get LBA Status Capability: Not Supported 00:28:30.833 Command & Feature Lockdown Capability: Not Supported 00:28:30.833 Abort Command Limit: 4 00:28:30.833 Async Event Request Limit: 4 00:28:30.833 Number of Firmware Slots: N/A 00:28:30.833 Firmware Slot 1 Read-Only: N/A 00:28:30.833 Firmware Activation Without Reset: N/A 00:28:30.833 Multiple Update Detection Support: N/A 00:28:30.833 Firmware Update Granularity: No Information Provided 00:28:30.833 Per-Namespace SMART Log: Yes 00:28:30.833 Asymmetric Namespace Access Log Page: Not Supported 00:28:30.833 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:30.833 Command Effects Log Page: Supported 00:28:30.833 Get Log Page Extended Data: Supported 00:28:30.833 Telemetry Log Pages: Not Supported 00:28:30.833 Persistent Event Log Pages: Not Supported 00:28:30.833 Supported Log Pages Log Page: May Support 00:28:30.833 Commands Supported & Effects Log Page: Not Supported 00:28:30.833 Feature Identifiers & Effects Log Page:May Support 00:28:30.833 NVMe-MI Commands & Effects Log Page: May Support 00:28:30.833 Data Area 4 for Telemetry Log: Not Supported 00:28:30.833 Error Log Page Entries Supported: 1 00:28:30.833 Keep Alive: Not Supported 00:28:30.833 00:28:30.833 NVM Command Set Attributes 00:28:30.833 ========================== 00:28:30.833 Submission Queue Entry Size 00:28:30.833 Max: 64 00:28:30.833 Min: 64 00:28:30.833 Completion Queue Entry Size 00:28:30.833 Max: 16 00:28:30.833 Min: 16 00:28:30.833 Number of Namespaces: 256 00:28:30.833 Compare Command: Supported 00:28:30.833 Write Uncorrectable Command: Not Supported 00:28:30.833 Dataset Management Command: Supported 00:28:30.833 Write Zeroes Command: Supported 00:28:30.833 Set Features Save Field: Supported 00:28:30.833 Reservations: Not Supported 00:28:30.833 Timestamp: Supported 00:28:30.833 Copy: Supported 00:28:30.833 Volatile Write Cache: Present 00:28:30.833 Atomic Write Unit (Normal): 1 00:28:30.833 Atomic Write Unit (PFail): 1 00:28:30.833 Atomic Compare & Write Unit: 1 00:28:30.833 Fused Compare & Write: Not Supported 00:28:30.833 Scatter-Gather List 00:28:30.833 SGL Command Set: Supported 00:28:30.833 SGL Keyed: Not Supported 00:28:30.833 SGL Bit Bucket Descriptor: Not Supported 00:28:30.833 SGL Metadata Pointer: Not Supported 00:28:30.833 Oversized SGL: Not Supported 00:28:30.833 SGL Metadata Address: Not Supported 00:28:30.833 SGL Offset: Not Supported 00:28:30.833 Transport SGL Data Block: Not Supported 00:28:30.833 Replay Protected Memory Block: Not Supported 00:28:30.833 00:28:30.833 Firmware Slot Information 00:28:30.833 ========================= 00:28:30.833 Active slot: 1 00:28:30.833 Slot 1 Firmware Revision: 1.0 00:28:30.833 00:28:30.833 
00:28:30.833 Commands Supported and Effects 00:28:30.833 ============================== 00:28:30.833 Admin Commands 00:28:30.833 -------------- 00:28:30.833 Delete I/O Submission Queue (00h): Supported 00:28:30.833 Create I/O Submission Queue (01h): Supported 00:28:30.833 Get Log Page (02h): Supported 00:28:30.833 Delete I/O Completion Queue (04h): Supported 00:28:30.833 Create I/O Completion Queue (05h): Supported 00:28:30.833 Identify (06h): Supported 00:28:30.833 Abort (08h): Supported 00:28:30.833 Set Features (09h): Supported 00:28:30.833 Get Features (0Ah): Supported 00:28:30.833 Asynchronous Event Request (0Ch): Supported 00:28:30.833 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:30.833 Directive Send (19h): Supported 00:28:30.833 Directive Receive (1Ah): Supported 00:28:30.833 Virtualization Management (1Ch): Supported 00:28:30.833 Doorbell Buffer Config (7Ch): Supported 00:28:30.833 Format NVM (80h): Supported LBA-Change 00:28:30.833 I/O Commands 00:28:30.833 ------------ 00:28:30.833 Flush (00h): Supported LBA-Change 00:28:30.833 Write (01h): Supported LBA-Change 00:28:30.833 Read (02h): Supported 00:28:30.833 Compare (05h): Supported 00:28:30.833 Write Zeroes (08h): Supported LBA-Change 00:28:30.833 Dataset Management (09h): Supported LBA-Change 00:28:30.833 Unknown (0Ch): Supported 00:28:30.833 Unknown (12h): Supported 00:28:30.833 Copy (19h): Supported LBA-Change 00:28:30.833 Unknown (1Dh): Supported LBA-Change 00:28:30.833 00:28:30.833 Error Log 00:28:30.833 ========= 00:28:30.833 00:28:30.833 Arbitration 00:28:30.833 =========== 00:28:30.833 Arbitration Burst: no limit 00:28:30.833 00:28:30.833 Power Management 00:28:30.833 ================ 00:28:30.833 Number of Power States: 1 00:28:30.833 Current Power State: Power State #0 00:28:30.833 Power State #0: 00:28:30.833 Max Power: 25.00 W 00:28:30.833 Non-Operational State: Operational 00:28:30.833 Entry Latency: 16 microseconds 00:28:30.833 Exit Latency: 4 microseconds 00:28:30.833 Relative Read Throughput: 0 00:28:30.833 Relative Read Latency: 0 00:28:30.833 Relative Write Throughput: 0 00:28:30.833 Relative Write Latency: 0 00:28:30.833 Idle Power: Not Reported 00:28:30.833 Active Power: Not Reported 00:28:30.833 Non-Operational Permissive Mode: Not Supported 00:28:30.833 00:28:30.833 Health Information 00:28:30.833 ================== 00:28:30.833 Critical Warnings: 00:28:30.833 Available Spare Space: OK 00:28:30.833 Temperature: OK 00:28:30.833 Device Reliability: OK 00:28:30.833 Read Only: No 00:28:30.833 Volatile Memory Backup: OK 00:28:30.833 Current Temperature: 323 Kelvin (50 Celsius) 00:28:30.833 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:30.833 Available Spare: 0% 00:28:30.833 Available Spare Threshold: 0% 00:28:30.833 Life Percentage Used: 0% 00:28:30.833 Data Units Read: 7671 00:28:30.833 Data Units Written: 3720 00:28:30.833 Host Read Commands: 367023 00:28:30.833 Host Write Commands: 198479 00:28:30.833 Controller Busy Time: 0 minutes 00:28:30.833 Power Cycles: 0 00:28:30.833 Power On Hours: 0 hours 00:28:30.833 Unsafe Shutdowns: 0 00:28:30.833 Unrecoverable Media Errors: 0 00:28:30.833 Lifetime Error Log Entries: 0 00:28:30.833 Warning Temperature Time: 0 minutes 00:28:30.833 Critical Temperature Time: 0 minutes 00:28:30.833 00:28:30.833 Number of Queues 00:28:30.833 ================ 00:28:30.833 Number of I/O Submission Queues: 64 00:28:30.833 Number of I/O Completion Queues: 64 00:28:30.833 00:28:30.833 ZNS Specific Controller Data 00:28:30.833 ============================ 
00:28:30.833 Zone Append Size Limit: 0 00:28:30.833 00:28:30.833 00:28:30.833 Active Namespaces 00:28:30.833 ================= 00:28:30.833 Namespace ID:1 00:28:30.833 Error Recovery Timeout: Unlimited 00:28:30.833 Command Set Identifier: NVM (00h) 00:28:30.833 Deallocate: Supported 00:28:30.833 Deallocated/Unwritten Error: Supported 00:28:30.833 Deallocated Read Value: All 0x00 00:28:30.833 Deallocate in Write Zeroes: Not Supported 00:28:30.833 Deallocated Guard Field: 0xFFFF 00:28:30.833 Flush: Supported 00:28:30.833 Reservation: Not Supported 00:28:30.833 Namespace Sharing Capabilities: Private 00:28:30.833 Size (in LBAs): 1310720 (5GiB) 00:28:30.833 Capacity (in LBAs): 1310720 (5GiB) 00:28:30.833 Utilization (in LBAs): 1310720 (5GiB) 00:28:30.833 Thin Provisioning: Not Supported 00:28:30.833 Per-NS Atomic Units: No 00:28:30.833 Maximum Single Source Range Length: 128 00:28:30.833 Maximum Copy Length: 128 00:28:30.833 Maximum Source Range Count: 128 00:28:30.833 NGUID/EUI64 Never Reused: No 00:28:30.833 Namespace Write Protected: No 00:28:30.833 Number of LBA Formats: 8 00:28:30.833 Current LBA Format: LBA Format #04 00:28:30.833 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:30.833 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:30.833 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:30.833 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:30.833 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:30.833 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:30.833 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:30.833 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:30.833 00:28:30.833 00:28:30.833 real 0m0.680s 00:28:30.833 user 0m0.238s 00:28:30.833 sys 0m0.356s 00:28:30.833 05:08:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:30.833 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:30.833 ************************************ 00:28:30.833 END TEST nvme_identify 00:28:30.833 ************************************ 00:28:30.833 05:08:54 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:28:30.833 05:08:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:30.833 05:08:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:30.833 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.092 ************************************ 00:28:31.092 START TEST nvme_perf 00:28:31.092 ************************************ 00:28:31.092 05:08:54 -- common/autotest_common.sh@1114 -- # nvme_perf 00:28:31.092 05:08:54 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:28:32.471 Initializing NVMe Controllers 00:28:32.471 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:32.471 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:32.471 Initialization complete. Launching workers. 
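The run above was launched as spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N. A minimal annotated re-invocation follows; the flag readings are inferred from this log rather than quoted from the tool's help text, so treat them as assumptions:

# Queue depth 128, sequential reads, 12288-byte (12 KiB, i.e. three
# blocks at the 4096-byte LBA format shown by identify) I/Os, 1 s run.
# -i 0 joins the stub's shared-memory group, so perf attaches to the
# controller as a DPDK secondary process. -LL and -N are carried over
# verbatim from the log (judging by the output, -LL enables the
# detailed latency histogram below).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N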
00:28:32.471 ======================================================== 00:28:32.471 Latency(us) 00:28:32.471 Device Information : IOPS MiB/s Average min max 00:28:32.471 PCIE (0000:00:06.0) NSID 1 from core 0: 58833.93 689.46 2176.36 1178.00 6552.64 00:28:32.471 ======================================================== 00:28:32.471 Total : 58833.93 689.46 2176.36 1178.00 6552.64 00:28:32.471 00:28:32.471 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:32.471 ================================================================================= 00:28:32.471 1.00000% : 1295.825us 00:28:32.471 10.00000% : 1489.455us 00:28:32.471 25.00000% : 1735.215us 00:28:32.471 50.00000% : 2159.709us 00:28:32.471 75.00000% : 2576.756us 00:28:32.471 90.00000% : 2844.858us 00:28:32.471 95.00000% : 3083.171us 00:28:32.471 98.00000% : 3351.273us 00:28:32.471 99.00000% : 3440.640us 00:28:32.471 99.50000% : 3544.902us 00:28:32.471 99.90000% : 4766.255us 00:28:32.471 99.99000% : 6374.865us 00:28:32.471 99.99900% : 6553.600us 00:28:32.471 99.99990% : 6553.600us 00:28:32.471 99.99999% : 6553.600us 00:28:32.471 00:28:32.471 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:32.471 ============================================================================== 00:28:32.471 Range in us Cumulative IO count 00:28:32.471 1176.669 - 1184.116: 0.0034% ( 2) 00:28:32.471 1184.116 - 1191.564: 0.0136% ( 6) 00:28:32.471 1191.564 - 1199.011: 0.0255% ( 7) 00:28:32.471 1199.011 - 1206.458: 0.0374% ( 7) 00:28:32.471 1206.458 - 1213.905: 0.0594% ( 13) 00:28:32.471 1213.905 - 1221.353: 0.0866% ( 16) 00:28:32.471 1221.353 - 1228.800: 0.1223% ( 21) 00:28:32.471 1228.800 - 1236.247: 0.1681% ( 27) 00:28:32.471 1236.247 - 1243.695: 0.2208% ( 31) 00:28:32.471 1243.695 - 1251.142: 0.2904% ( 41) 00:28:32.471 1251.142 - 1258.589: 0.3770% ( 51) 00:28:32.471 1258.589 - 1266.036: 0.4772% ( 59) 00:28:32.471 1266.036 - 1273.484: 0.6029% ( 74) 00:28:32.471 1273.484 - 1280.931: 0.7337% ( 77) 00:28:32.471 1280.931 - 1288.378: 0.9171% ( 108) 00:28:32.471 1288.378 - 1295.825: 1.1005% ( 108) 00:28:32.471 1295.825 - 1303.273: 1.2857% ( 109) 00:28:32.471 1303.273 - 1310.720: 1.5268% ( 142) 00:28:32.471 1310.720 - 1318.167: 1.7663% ( 141) 00:28:32.471 1318.167 - 1325.615: 1.9990% ( 137) 00:28:32.471 1325.615 - 1333.062: 2.2843% ( 168) 00:28:32.471 1333.062 - 1340.509: 2.5679% ( 167) 00:28:32.471 1340.509 - 1347.956: 2.8821% ( 185) 00:28:32.471 1347.956 - 1355.404: 3.2116% ( 194) 00:28:32.471 1355.404 - 1362.851: 3.5105% ( 176) 00:28:32.471 1362.851 - 1370.298: 3.8740% ( 214) 00:28:32.471 1370.298 - 1377.745: 4.1967% ( 190) 00:28:32.471 1377.745 - 1385.193: 4.5635% ( 216) 00:28:32.471 1385.193 - 1392.640: 4.9575% ( 232) 00:28:32.471 1392.640 - 1400.087: 5.3176% ( 212) 00:28:32.471 1400.087 - 1407.535: 5.7167% ( 235) 00:28:32.471 1407.535 - 1414.982: 6.1192% ( 237) 00:28:32.471 1414.982 - 1422.429: 6.4963% ( 222) 00:28:32.471 1422.429 - 1429.876: 6.9209% ( 250) 00:28:32.471 1429.876 - 1437.324: 7.3166% ( 233) 00:28:32.471 1437.324 - 1444.771: 7.7531% ( 257) 00:28:32.471 1444.771 - 1452.218: 8.1743% ( 248) 00:28:32.471 1452.218 - 1459.665: 8.5649% ( 230) 00:28:32.471 1459.665 - 1467.113: 9.0319% ( 275) 00:28:32.471 1467.113 - 1474.560: 9.4616% ( 253) 00:28:32.471 1474.560 - 1482.007: 9.8794% ( 246) 00:28:32.471 1482.007 - 1489.455: 10.3397% ( 271) 00:28:32.471 1489.455 - 1496.902: 10.7660% ( 251) 00:28:32.471 1496.902 - 1504.349: 11.2313% ( 274) 00:28:32.472 1504.349 - 1511.796: 11.6525% ( 248) 00:28:32.472 1511.796 - 1519.244: 
12.1111% ( 270) 00:28:32.472 1519.244 - 1526.691: 12.5543% ( 261) 00:28:32.472 1526.691 - 1534.138: 12.9806% ( 251) 00:28:32.472 1534.138 - 1541.585: 13.4375% ( 269) 00:28:32.472 1541.585 - 1549.033: 13.8876% ( 265) 00:28:32.472 1549.033 - 1556.480: 14.3122% ( 250) 00:28:32.472 1556.480 - 1563.927: 14.7724% ( 271) 00:28:32.472 1563.927 - 1571.375: 15.2140% ( 260) 00:28:32.472 1571.375 - 1578.822: 15.6539% ( 259) 00:28:32.472 1578.822 - 1586.269: 16.0751% ( 248) 00:28:32.472 1586.269 - 1593.716: 16.5319% ( 269) 00:28:32.472 1593.716 - 1601.164: 16.9803% ( 264) 00:28:32.472 1601.164 - 1608.611: 17.4134% ( 255) 00:28:32.472 1608.611 - 1616.058: 17.8601% ( 263) 00:28:32.472 1616.058 - 1623.505: 18.3050% ( 262) 00:28:32.472 1623.505 - 1630.953: 18.7551% ( 265) 00:28:32.472 1630.953 - 1638.400: 19.2086% ( 267) 00:28:32.472 1638.400 - 1645.847: 19.6501% ( 260) 00:28:32.472 1645.847 - 1653.295: 20.1036% ( 267) 00:28:32.472 1653.295 - 1660.742: 20.5571% ( 267) 00:28:32.472 1660.742 - 1668.189: 20.9901% ( 255) 00:28:32.472 1668.189 - 1675.636: 21.4487% ( 270) 00:28:32.472 1675.636 - 1683.084: 21.9107% ( 272) 00:28:32.472 1683.084 - 1690.531: 22.3454% ( 256) 00:28:32.472 1690.531 - 1697.978: 22.7836% ( 258) 00:28:32.472 1697.978 - 1705.425: 23.2286% ( 262) 00:28:32.472 1705.425 - 1712.873: 23.6668% ( 258) 00:28:32.472 1712.873 - 1720.320: 24.1457% ( 282) 00:28:32.472 1720.320 - 1727.767: 24.5533% ( 240) 00:28:32.472 1727.767 - 1735.215: 25.0153% ( 272) 00:28:32.472 1735.215 - 1742.662: 25.4806% ( 274) 00:28:32.472 1742.662 - 1750.109: 25.9001% ( 247) 00:28:32.472 1750.109 - 1757.556: 26.3655% ( 274) 00:28:32.472 1757.556 - 1765.004: 26.8207% ( 268) 00:28:32.472 1765.004 - 1772.451: 27.2334% ( 243) 00:28:32.472 1772.451 - 1779.898: 27.7055% ( 278) 00:28:32.472 1779.898 - 1787.345: 28.1454% ( 259) 00:28:32.472 1787.345 - 1794.793: 28.6022% ( 269) 00:28:32.472 1794.793 - 1802.240: 29.0506% ( 264) 00:28:32.472 1802.240 - 1809.687: 29.4905% ( 259) 00:28:32.472 1809.687 - 1817.135: 29.9643% ( 279) 00:28:32.472 1817.135 - 1824.582: 30.3838% ( 247) 00:28:32.472 1824.582 - 1832.029: 30.8543% ( 277) 00:28:32.472 1832.029 - 1839.476: 31.2721% ( 246) 00:28:32.472 1839.476 - 1846.924: 31.7340% ( 272) 00:28:32.472 1846.924 - 1854.371: 32.1773% ( 261) 00:28:32.472 1854.371 - 1861.818: 32.6172% ( 259) 00:28:32.472 1861.818 - 1869.265: 33.0673% ( 265) 00:28:32.472 1869.265 - 1876.713: 33.5071% ( 259) 00:28:32.472 1876.713 - 1884.160: 33.9623% ( 268) 00:28:32.472 1884.160 - 1891.607: 34.4005% ( 258) 00:28:32.472 1891.607 - 1899.055: 34.8285% ( 252) 00:28:32.472 1899.055 - 1906.502: 35.2836% ( 268) 00:28:32.472 1906.502 - 1921.396: 36.1685% ( 521) 00:28:32.472 1921.396 - 1936.291: 37.0228% ( 503) 00:28:32.472 1936.291 - 1951.185: 37.9297% ( 534) 00:28:32.472 1951.185 - 1966.080: 38.8043% ( 515) 00:28:32.472 1966.080 - 1980.975: 39.6943% ( 524) 00:28:32.472 1980.975 - 1995.869: 40.5757% ( 519) 00:28:32.472 1995.869 - 2010.764: 41.4487% ( 514) 00:28:32.472 2010.764 - 2025.658: 42.3692% ( 542) 00:28:32.472 2025.658 - 2040.553: 43.2575% ( 523) 00:28:32.472 2040.553 - 2055.447: 44.1780% ( 542) 00:28:32.472 2055.447 - 2070.342: 45.0493% ( 513) 00:28:32.472 2070.342 - 2085.236: 45.9409% ( 525) 00:28:32.472 2085.236 - 2100.131: 46.8597% ( 541) 00:28:32.472 2100.131 - 2115.025: 47.7463% ( 522) 00:28:32.472 2115.025 - 2129.920: 48.6226% ( 516) 00:28:32.472 2129.920 - 2144.815: 49.5092% ( 522) 00:28:32.472 2144.815 - 2159.709: 50.4263% ( 540) 00:28:32.472 2159.709 - 2174.604: 51.3111% ( 521) 00:28:32.472 2174.604 - 2189.498: 
52.1671% ( 504) 00:28:32.472 2189.498 - 2204.393: 53.0774% ( 536) 00:28:32.472 2204.393 - 2219.287: 53.9555% ( 517) 00:28:32.472 2219.287 - 2234.182: 54.8319% ( 516) 00:28:32.472 2234.182 - 2249.076: 55.7048% ( 514) 00:28:32.472 2249.076 - 2263.971: 56.6270% ( 543) 00:28:32.472 2263.971 - 2278.865: 57.4983% ( 513) 00:28:32.472 2278.865 - 2293.760: 58.4001% ( 531) 00:28:32.472 2293.760 - 2308.655: 59.2731% ( 514) 00:28:32.472 2308.655 - 2323.549: 60.1613% ( 523) 00:28:32.472 2323.549 - 2338.444: 61.0445% ( 520) 00:28:32.472 2338.444 - 2353.338: 61.9395% ( 527) 00:28:32.472 2353.338 - 2368.233: 62.8482% ( 535) 00:28:32.472 2368.233 - 2383.127: 63.7381% ( 524) 00:28:32.472 2383.127 - 2398.022: 64.6145% ( 516) 00:28:32.472 2398.022 - 2412.916: 65.4976% ( 520) 00:28:32.472 2412.916 - 2427.811: 66.4198% ( 543) 00:28:32.472 2427.811 - 2442.705: 67.2979% ( 517) 00:28:32.472 2442.705 - 2457.600: 68.1912% ( 526) 00:28:32.472 2457.600 - 2472.495: 69.1101% ( 541) 00:28:32.472 2472.495 - 2487.389: 70.0187% ( 535) 00:28:32.472 2487.389 - 2502.284: 70.8950% ( 516) 00:28:32.472 2502.284 - 2517.178: 71.8156% ( 542) 00:28:32.472 2517.178 - 2532.073: 72.7089% ( 526) 00:28:32.472 2532.073 - 2546.967: 73.6294% ( 542) 00:28:32.472 2546.967 - 2561.862: 74.5312% ( 531) 00:28:32.472 2561.862 - 2576.756: 75.4144% ( 520) 00:28:32.472 2576.756 - 2591.651: 76.2942% ( 518) 00:28:32.472 2591.651 - 2606.545: 77.2045% ( 536) 00:28:32.472 2606.545 - 2621.440: 78.1046% ( 530) 00:28:32.472 2621.440 - 2636.335: 79.0099% ( 533) 00:28:32.472 2636.335 - 2651.229: 79.8964% ( 522) 00:28:32.472 2651.229 - 2666.124: 80.7796% ( 520) 00:28:32.472 2666.124 - 2681.018: 81.6508% ( 513) 00:28:32.472 2681.018 - 2695.913: 82.5476% ( 528) 00:28:32.472 2695.913 - 2710.807: 83.4307% ( 520) 00:28:32.472 2710.807 - 2725.702: 84.2952% ( 509) 00:28:32.472 2725.702 - 2740.596: 85.1783% ( 520) 00:28:32.472 2740.596 - 2755.491: 86.0071% ( 488) 00:28:32.472 2755.491 - 2770.385: 86.7782% ( 454) 00:28:32.472 2770.385 - 2785.280: 87.5730% ( 468) 00:28:32.472 2785.280 - 2800.175: 88.2914% ( 423) 00:28:32.472 2800.175 - 2815.069: 88.9912% ( 412) 00:28:32.472 2815.069 - 2829.964: 89.6365% ( 380) 00:28:32.472 2829.964 - 2844.858: 90.2259% ( 347) 00:28:32.472 2844.858 - 2859.753: 90.7711% ( 321) 00:28:32.472 2859.753 - 2874.647: 91.2704% ( 294) 00:28:32.472 2874.647 - 2889.542: 91.6950% ( 250) 00:28:32.472 2889.542 - 2904.436: 92.1009% ( 239) 00:28:32.472 2904.436 - 2919.331: 92.4643% ( 214) 00:28:32.472 2919.331 - 2934.225: 92.8040% ( 200) 00:28:32.472 2934.225 - 2949.120: 93.0893% ( 168) 00:28:32.472 2949.120 - 2964.015: 93.3645% ( 162) 00:28:32.472 2964.015 - 2978.909: 93.6107% ( 145) 00:28:32.472 2978.909 - 2993.804: 93.8451% ( 138) 00:28:32.472 2993.804 - 3008.698: 94.0608% ( 127) 00:28:32.472 3008.698 - 3023.593: 94.2850% ( 132) 00:28:32.472 3023.593 - 3038.487: 94.4871% ( 119) 00:28:32.472 3038.487 - 3053.382: 94.6875% ( 118) 00:28:32.472 3053.382 - 3068.276: 94.8675% ( 106) 00:28:32.472 3068.276 - 3083.171: 95.0442% ( 104) 00:28:32.472 3083.171 - 3098.065: 95.2242% ( 106) 00:28:32.472 3098.065 - 3112.960: 95.3957% ( 101) 00:28:32.472 3112.960 - 3127.855: 95.5639% ( 99) 00:28:32.472 3127.855 - 3142.749: 95.7422% ( 105) 00:28:32.472 3142.749 - 3157.644: 95.9120% ( 100) 00:28:32.472 3157.644 - 3172.538: 96.0836% ( 101) 00:28:32.472 3172.538 - 3187.433: 96.2534% ( 100) 00:28:32.472 3187.433 - 3202.327: 96.4368% ( 108) 00:28:32.472 3202.327 - 3217.222: 96.6084% ( 101) 00:28:32.472 3217.222 - 3232.116: 96.7867% ( 105) 00:28:32.472 3232.116 - 3247.011: 
96.9599% ( 102) 00:28:32.472 3247.011 - 3261.905: 97.1349% ( 103) 00:28:32.472 3261.905 - 3276.800: 97.3115% ( 104) 00:28:32.472 3276.800 - 3291.695: 97.4932% ( 107) 00:28:32.472 3291.695 - 3306.589: 97.6647% ( 101) 00:28:32.472 3306.589 - 3321.484: 97.8278% ( 96) 00:28:32.472 3321.484 - 3336.378: 97.9925% ( 97) 00:28:32.472 3336.378 - 3351.273: 98.1471% ( 91) 00:28:32.472 3351.273 - 3366.167: 98.3016% ( 91) 00:28:32.472 3366.167 - 3381.062: 98.4477% ( 86) 00:28:32.472 3381.062 - 3395.956: 98.6005% ( 90) 00:28:32.472 3395.956 - 3410.851: 98.7398% ( 82) 00:28:32.472 3410.851 - 3425.745: 98.8757% ( 80) 00:28:32.472 3425.745 - 3440.640: 99.0031% ( 75) 00:28:32.472 3440.640 - 3455.535: 99.1118% ( 64) 00:28:32.472 3455.535 - 3470.429: 99.2204% ( 64) 00:28:32.472 3470.429 - 3485.324: 99.3105% ( 53) 00:28:32.472 3485.324 - 3500.218: 99.3852% ( 44) 00:28:32.472 3500.218 - 3515.113: 99.4429% ( 34) 00:28:32.472 3515.113 - 3530.007: 99.4956% ( 31) 00:28:32.472 3530.007 - 3544.902: 99.5363% ( 24) 00:28:32.472 3544.902 - 3559.796: 99.5652% ( 17) 00:28:32.472 3559.796 - 3574.691: 99.5890% ( 14) 00:28:32.472 3574.691 - 3589.585: 99.6094% ( 12) 00:28:32.472 3589.585 - 3604.480: 99.6247% ( 9) 00:28:32.472 3604.480 - 3619.375: 99.6332% ( 5) 00:28:32.472 3619.375 - 3634.269: 99.6433% ( 6) 00:28:32.472 3634.269 - 3649.164: 99.6518% ( 5) 00:28:32.472 3649.164 - 3664.058: 99.6603% ( 5) 00:28:32.472 3664.058 - 3678.953: 99.6705% ( 6) 00:28:32.472 3678.953 - 3693.847: 99.6756% ( 3) 00:28:32.472 3693.847 - 3708.742: 99.6841% ( 5) 00:28:32.472 3708.742 - 3723.636: 99.6909% ( 4) 00:28:32.472 3723.636 - 3738.531: 99.6960% ( 3) 00:28:32.472 3738.531 - 3753.425: 99.7011% ( 3) 00:28:32.472 3753.425 - 3768.320: 99.7062% ( 3) 00:28:32.472 3768.320 - 3783.215: 99.7113% ( 3) 00:28:32.472 3783.215 - 3798.109: 99.7164% ( 3) 00:28:32.472 3798.109 - 3813.004: 99.7198% ( 2) 00:28:32.472 3813.004 - 3842.793: 99.7283% ( 5) 00:28:32.472 3842.793 - 3872.582: 99.7368% ( 5) 00:28:32.472 3872.582 - 3902.371: 99.7418% ( 3) 00:28:32.473 3902.371 - 3932.160: 99.7503% ( 5) 00:28:32.473 3932.160 - 3961.949: 99.7588% ( 5) 00:28:32.473 3961.949 - 3991.738: 99.7656% ( 4) 00:28:32.473 3991.738 - 4021.527: 99.7741% ( 5) 00:28:32.473 4021.527 - 4051.316: 99.7809% ( 4) 00:28:32.473 4051.316 - 4081.105: 99.7894% ( 5) 00:28:32.473 4081.105 - 4110.895: 99.7979% ( 5) 00:28:32.473 4110.895 - 4140.684: 99.8064% ( 5) 00:28:32.473 4140.684 - 4170.473: 99.8149% ( 5) 00:28:32.473 4170.473 - 4200.262: 99.8217% ( 4) 00:28:32.473 4200.262 - 4230.051: 99.8285% ( 4) 00:28:32.473 4230.051 - 4259.840: 99.8370% ( 5) 00:28:32.473 4259.840 - 4289.629: 99.8454% ( 5) 00:28:32.473 4289.629 - 4319.418: 99.8522% ( 4) 00:28:32.473 4319.418 - 4349.207: 99.8607% ( 5) 00:28:32.473 4349.207 - 4378.996: 99.8675% ( 4) 00:28:32.473 4378.996 - 4408.785: 99.8726% ( 3) 00:28:32.473 4408.785 - 4438.575: 99.8777% ( 3) 00:28:32.473 4438.575 - 4468.364: 99.8828% ( 3) 00:28:32.473 4468.364 - 4498.153: 99.8862% ( 2) 00:28:32.473 4498.153 - 4527.942: 99.8913% ( 3) 00:28:32.473 4527.942 - 4557.731: 99.8930% ( 1) 00:28:32.473 4617.309 - 4647.098: 99.8947% ( 1) 00:28:32.473 4647.098 - 4676.887: 99.8964% ( 1) 00:28:32.473 4676.887 - 4706.676: 99.8981% ( 1) 00:28:32.473 4706.676 - 4736.465: 99.8998% ( 1) 00:28:32.473 4736.465 - 4766.255: 99.9015% ( 1) 00:28:32.473 4766.255 - 4796.044: 99.9032% ( 1) 00:28:32.473 4796.044 - 4825.833: 99.9049% ( 1) 00:28:32.473 4825.833 - 4855.622: 99.9066% ( 1) 00:28:32.473 4855.622 - 4885.411: 99.9083% ( 1) 00:28:32.473 4885.411 - 4915.200: 99.9100% ( 1) 
00:28:32.473 4915.200 - 4944.989: 99.9117% ( 1) 00:28:32.473 4944.989 - 4974.778: 99.9134% ( 1) 00:28:32.473 4974.778 - 5004.567: 99.9151% ( 1) 00:28:32.473 5004.567 - 5034.356: 99.9168% ( 1) 00:28:32.473 5034.356 - 5064.145: 99.9185% ( 1) 00:28:32.473 5064.145 - 5093.935: 99.9202% ( 1) 00:28:32.473 5093.935 - 5123.724: 99.9219% ( 1) 00:28:32.473 5123.724 - 5153.513: 99.9236% ( 1) 00:28:32.473 5153.513 - 5183.302: 99.9253% ( 1) 00:28:32.473 5183.302 - 5213.091: 99.9270% ( 1) 00:28:32.473 5213.091 - 5242.880: 99.9287% ( 1) 00:28:32.473 5242.880 - 5272.669: 99.9304% ( 1) 00:28:32.473 5272.669 - 5302.458: 99.9321% ( 1) 00:28:32.473 5302.458 - 5332.247: 99.9338% ( 1) 00:28:32.473 5332.247 - 5362.036: 99.9355% ( 1) 00:28:32.473 5362.036 - 5391.825: 99.9372% ( 1) 00:28:32.473 5391.825 - 5421.615: 99.9389% ( 1) 00:28:32.473 5421.615 - 5451.404: 99.9406% ( 1) 00:28:32.473 5451.404 - 5481.193: 99.9423% ( 1) 00:28:32.473 5481.193 - 5510.982: 99.9440% ( 1) 00:28:32.473 5510.982 - 5540.771: 99.9457% ( 1) 00:28:32.473 5540.771 - 5570.560: 99.9474% ( 1) 00:28:32.473 5570.560 - 5600.349: 99.9490% ( 1) 00:28:32.473 5600.349 - 5630.138: 99.9507% ( 1) 00:28:32.473 5630.138 - 5659.927: 99.9524% ( 1) 00:28:32.473 5659.927 - 5689.716: 99.9541% ( 1) 00:28:32.473 5689.716 - 5719.505: 99.9558% ( 1) 00:28:32.473 5719.505 - 5749.295: 99.9575% ( 1) 00:28:32.473 5749.295 - 5779.084: 99.9592% ( 1) 00:28:32.473 5779.084 - 5808.873: 99.9609% ( 1) 00:28:32.473 5808.873 - 5838.662: 99.9626% ( 1) 00:28:32.473 5838.662 - 5868.451: 99.9643% ( 1) 00:28:32.473 5868.451 - 5898.240: 99.9660% ( 1) 00:28:32.473 5898.240 - 5928.029: 99.9677% ( 1) 00:28:32.473 5928.029 - 5957.818: 99.9694% ( 1) 00:28:32.473 5987.607 - 6017.396: 99.9711% ( 1) 00:28:32.473 6017.396 - 6047.185: 99.9728% ( 1) 00:28:32.473 6047.185 - 6076.975: 99.9745% ( 1) 00:28:32.473 6076.975 - 6106.764: 99.9762% ( 1) 00:28:32.473 6106.764 - 6136.553: 99.9779% ( 1) 00:28:32.473 6136.553 - 6166.342: 99.9796% ( 1) 00:28:32.473 6196.131 - 6225.920: 99.9830% ( 2) 00:28:32.473 6225.920 - 6255.709: 99.9847% ( 1) 00:28:32.473 6255.709 - 6285.498: 99.9864% ( 1) 00:28:32.473 6285.498 - 6315.287: 99.9881% ( 1) 00:28:32.473 6315.287 - 6345.076: 99.9898% ( 1) 00:28:32.473 6345.076 - 6374.865: 99.9915% ( 1) 00:28:32.473 6374.865 - 6404.655: 99.9932% ( 1) 00:28:32.473 6404.655 - 6434.444: 99.9949% ( 1) 00:28:32.473 6434.444 - 6464.233: 99.9966% ( 1) 00:28:32.473 6464.233 - 6494.022: 99.9983% ( 1) 00:28:32.473 6523.811 - 6553.600: 100.0000% ( 1) 00:28:32.473 00:28:32.473 05:08:55 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:28:33.850 Initializing NVMe Controllers 00:28:33.850 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:33.850 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:33.850 Initialization complete. Launching workers. 
00:28:33.850 ========================================================
00:28:33.850                                                  Latency(us)
00:28:33.850 Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:33.850 PCIE (0000:00:06.0) NSID 1 from core 0:   49271.08     577.40    2601.69    1491.14    5030.36
00:28:33.850 ========================================================
00:28:33.850 Total                                  :   49271.08     577.40    2601.69    1491.14    5030.36
00:28:33.850
00:28:33.850 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:28:33.850 =================================================================================
00:28:33.850   1.00000% :  1772.451us
00:28:33.850  10.00000% :  1980.975us
00:28:33.850  25.00000% :  2189.498us
00:28:33.850  50.00000% :  2591.651us
00:28:33.850  75.00000% :  2993.804us
00:28:33.850  90.00000% :  3247.011us
00:28:33.850  95.00000% :  3381.062us
00:28:33.850  98.00000% :  3515.113us
00:28:33.850  99.00000% :  3619.375us
00:28:33.850  99.50000% :  3738.531us
00:28:33.850  99.90000% :  4349.207us
00:28:33.850  99.99000% :  4944.989us
00:28:33.850  99.99900% :  5034.356us
00:28:33.850  99.99990% :  5034.356us
00:28:33.850  99.99999% :  5034.356us
00:28:33.850
00:28:33.850 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:28:33.850 ==============================================================================
00:28:33.850         Range in us     Cumulative    IO count
[ per-bucket cumulative IO counts elided: buckets run from "1489.455 - 1496.902: 0.0020% ( 1)" through "5004.567 - 5034.356: 100.0000% ( 1)" ]
00:28:33.852
00:28:33.852  05:08:57 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:28:33.852
00:28:33.852 real	0m2.654s
00:28:33.852 user	0m2.257s
00:28:33.852 sys	0m0.309s
00:28:33.852  05:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:33.852  05:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:33.852 ************************************
00:28:33.852 END TEST nvme_perf
00:28:33.852 ************************************
00:28:33.852  05:08:57 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:28:33.852  05:08:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:28:33.852  05:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:33.852  05:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:33.852 ************************************
00:28:33.852 START TEST nvme_hello_world
00:28:33.852 ************************************
00:28:33.852  05:08:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:28:33.852 Initializing NVMe Controllers
00:28:33.852 Attached to 0000:00:06.0
00:28:33.852 Namespace ID: 1 size: 5GB
00:28:33.852 Initialization complete.
00:28:33.852 INFO: using host memory buffer for IO
00:28:33.852 Hello world!
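The percentile summary above is the quickest way to compare perf runs: each line maps a cumulative percentage to the latency bucket it falls in. A minimal sketch for pulling one percentile out of a saved run follows; the log file name (perf.log) and the helper name (pctl) are illustrative, not part of the SPDK tree, and the parsing assumes the "NN.NNNNN% : NNNN.NNNus" layout shown above.

# Sketch: read a percentile from a captured spdk_nvme_perf summary block.
# 'perf.log' and 'pctl' are hypothetical names, not SPDK tooling.
pctl() {
  local p=$1 log=$2
  # match summary lines like " 99.00000% :  3619.375us" and print the latency
  awk -v p="$p" '$1 ~ /%$/ && $1 + 0 == p { sub(/us$/, "", $3); print $3; exit }' "$log"
}
p50=$(pctl 50 perf.log)    # 2591.651 for the run above
p99=$(pctl 99 perf.log)    # 3619.375 for the run above
echo "p50=${p50}us p99=${p99}us"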
00:28:33.852 00:28:33.852 real 0m0.304s 00:28:33.852 user 0m0.114s 00:28:33.852 sys 0m0.144s 00:28:33.852 05:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.852 05:08:57 -- common/autotest_common.sh@10 -- # set +x 00:28:33.852 ************************************ 00:28:33.852 END TEST nvme_hello_world 00:28:33.852 ************************************ 00:28:34.111 05:08:57 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:34.111 05:08:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:34.111 05:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.111 05:08:57 -- common/autotest_common.sh@10 -- # set +x 00:28:34.111 ************************************ 00:28:34.111 START TEST nvme_sgl 00:28:34.111 ************************************ 00:28:34.111 05:08:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:34.370 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:28:34.370 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:28:34.370 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:28:34.370 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:28:34.370 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:28:34.370 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:28:34.370 NVMe Readv/Writev Request test 00:28:34.370 Attached to 0000:00:06.0 00:28:34.370 0000:00:06.0: build_io_request_2 test passed 00:28:34.370 0000:00:06.0: build_io_request_4 test passed 00:28:34.370 0000:00:06.0: build_io_request_5 test passed 00:28:34.370 0000:00:06.0: build_io_request_6 test passed 00:28:34.370 0000:00:06.0: build_io_request_7 test passed 00:28:34.370 0000:00:06.0: build_io_request_10 test passed 00:28:34.370 Cleaning up... 00:28:34.370 00:28:34.370 real 0m0.371s 00:28:34.370 user 0m0.194s 00:28:34.370 sys 0m0.132s 00:28:34.370 05:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.370 05:08:57 -- common/autotest_common.sh@10 -- # set +x 00:28:34.370 ************************************ 00:28:34.370 END TEST nvme_sgl 00:28:34.370 ************************************ 00:28:34.370 05:08:57 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:34.370 05:08:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:34.370 05:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.370 05:08:57 -- common/autotest_common.sh@10 -- # set +x 00:28:34.370 ************************************ 00:28:34.370 START TEST nvme_e2edp 00:28:34.370 ************************************ 00:28:34.371 05:08:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:34.630 NVMe Write/Read with End-to-End data protection test 00:28:34.630 Attached to 0000:00:06.0 00:28:34.630 Cleaning up... 
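Every START TEST/END TEST pair in this log comes from the run_test wrapper in autotest_common.sh, which banners the test name, times the command (producing the real/user/sys lines), and propagates its exit code. The following is a condensed, standalone rewrite of that pattern for illustration only; it mirrors the banners visible here but is not the actual autotest_common.sh source.

# Sketch: a condensed run_test-style wrapper (not the real autotest_common.sh code).
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                  # the 'time' keyword emits the real/user/sys trailer
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl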
00:28:34.630 00:28:34.630 real 0m0.286s 00:28:34.630 user 0m0.103s 00:28:34.630 sys 0m0.141s 00:28:34.630 05:08:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.630 05:08:58 -- common/autotest_common.sh@10 -- # set +x 00:28:34.630 ************************************ 00:28:34.630 END TEST nvme_e2edp 00:28:34.630 ************************************ 00:28:34.630 05:08:58 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:34.630 05:08:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:34.630 05:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.630 05:08:58 -- common/autotest_common.sh@10 -- # set +x 00:28:34.889 ************************************ 00:28:34.889 START TEST nvme_reserve 00:28:34.889 ************************************ 00:28:34.889 05:08:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:34.889 ===================================================== 00:28:34.889 NVMe Controller at PCI bus 0, device 6, function 0 00:28:34.889 ===================================================== 00:28:34.889 Reservations: Not Supported 00:28:34.889 Reservation test passed 00:28:34.889 00:28:34.889 real 0m0.251s 00:28:34.889 user 0m0.092s 00:28:34.889 sys 0m0.113s 00:28:34.889 05:08:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.889 05:08:58 -- common/autotest_common.sh@10 -- # set +x 00:28:34.889 ************************************ 00:28:34.889 END TEST nvme_reserve 00:28:34.889 ************************************ 00:28:35.149 05:08:58 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:35.149 05:08:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:35.149 05:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:35.149 05:08:58 -- common/autotest_common.sh@10 -- # set +x 00:28:35.149 ************************************ 00:28:35.149 START TEST nvme_err_injection 00:28:35.149 ************************************ 00:28:35.149 05:08:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:35.408 NVMe Error Injection test 00:28:35.408 Attached to 0000:00:06.0 00:28:35.408 0000:00:06.0: get features failed as expected 00:28:35.408 0000:00:06.0: get features successfully as expected 00:28:35.408 0000:00:06.0: read failed as expected 00:28:35.408 0000:00:06.0: read successfully as expected 00:28:35.408 Cleaning up... 
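The err_injection binary above toggles expected failures in-process; the same effect can be produced against a running target over RPC, which is what the bdev_nvme_reset_stuck_adm_cmd test does later in this log. A sketch follows, with the flag values taken verbatim from that later invocation; it assumes a spdk_tgt is already running with the controller attached under the name nvme0.

# Sketch: inject one failing admin command via RPC, as the reset_stuck test below does.
# Requires a running spdk_tgt; the name 'nvme0' matches the later invocation in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
$RPC bdev_nvme_add_error_injection -n nvme0 \
    --cmd-type admin --opc 10 --timeout-in-us 15000000 \
    --err-count 1 --sct 0 --sc 1 --do_not_submit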
00:28:35.408
00:28:35.408 real	0m0.252s
00:28:35.408 user	0m0.104s
00:28:35.408 sys	0m0.104s
00:28:35.408  05:08:58 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:35.408  05:08:58 -- common/autotest_common.sh@10 -- # set +x
00:28:35.408 ************************************
00:28:35.408 END TEST nvme_err_injection
00:28:35.408 ************************************
00:28:35.408  05:08:58 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:28:35.408  05:08:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:28:35.408  05:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:35.408  05:08:58 -- common/autotest_common.sh@10 -- # set +x
00:28:35.408 ************************************
00:28:35.408 START TEST nvme_overhead
00:28:35.408 ************************************
00:28:35.408  05:08:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:28:36.783 Initializing NVMe Controllers
00:28:36.783 Attached to 0000:00:06.0
00:28:36.783 Initialization complete. Launching workers.
00:28:36.783 submit (in ns)   avg, min, max =  16899.4, 12717.3, 117320.9
00:28:36.784 complete (in ns) avg, min, max =  12491.7,  8772.7,  61729.1
00:28:36.784
00:28:36.784 Submit histogram
00:28:36.784 ================
00:28:36.784         Range in us     Cumulative     Count
[ per-bucket cumulative counts elided: buckets run from "12.684 - 12.742: 0.0116% ( 1)" through "117.295 - 117.760: 100.0000% ( 1)" ]
00:28:36.785
00:28:36.785 Complete histogram
00:28:36.785 ==================
00:28:36.785         Range in us     Cumulative     Count
[ per-bucket cumulative counts elided: buckets run from "8.727 - 8.785: 0.0116% ( 1)" through "61.440 - 61.905: 100.0000% ( 1)" ]
00:28:36.787
00:28:36.787
00:28:36.787 real	0m1.323s
00:28:36.787 user	0m1.130s
00:28:36.787 sys	0m0.147s
00:28:36.787  05:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:36.787  05:09:00 -- common/autotest_common.sh@10 -- # set +x
00:28:36.787 ************************************
00:28:36.787 END TEST nvme_overhead
00:28:36.787 ************************************
00:28:36.787  05:09:00 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:28:36.787  05:09:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:28:36.787  05:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:36.787  05:09:00 -- common/autotest_common.sh@10 -- # set +x
00:28:36.787 ************************************
00:28:36.787 START TEST nvme_arbitration
00:28:36.787 ************************************
00:28:36.787  05:09:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:28:40.072 Initializing NVMe Controllers
00:28:40.072 Attached to 0000:00:06.0
00:28:40.072 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:28:40.072 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:28:40.072 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:28:40.072 Associating QEMU NVMe Ctrl (12340 ) with lcore 3
00:28:40.072 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:28:40.072 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:28:40.072 Initialization complete. Launching workers.
00:28:40.072 Starting thread on core 1 with urgent priority queue
00:28:40.072 Starting thread on core 2 with urgent priority queue
00:28:40.072 Starting thread on core 3 with urgent priority queue
00:28:40.072 Starting thread on core 0 with urgent priority queue
00:28:40.072 QEMU NVMe Ctrl (12340 ) core 0:  1514.67 IO/s    66.02 secs/100000 ios
00:28:40.072 QEMU NVMe Ctrl (12340 ) core 1:  1344.00 IO/s    74.40 secs/100000 ios
00:28:40.072 QEMU NVMe Ctrl (12340 ) core 2:   661.33 IO/s   151.21 secs/100000 ios
00:28:40.072 QEMU NVMe Ctrl (12340 ) core 3:   490.67 IO/s   203.80 secs/100000 ios
00:28:40.072 ========================================================
00:28:40.072
00:28:40.072
00:28:40.072 real	0m3.438s
00:28:40.072 user	0m9.432s
00:28:40.072 sys	0m0.155s
00:28:40.072  05:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:40.072  05:09:03 -- common/autotest_common.sh@10 -- # set +x
00:28:40.072 ************************************
00:28:40.072 END TEST nvme_arbitration
00:28:40.072 ************************************
00:28:40.331  05:09:03 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:28:40.331  05:09:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:28:40.331  05:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:40.331  05:09:03 -- common/autotest_common.sh@10 -- # set +x
00:28:40.331 ************************************
00:28:40.331 START TEST nvme_single_aen
00:28:40.331 ************************************
00:28:40.331  05:09:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:28:40.331 [2024-11-18 05:09:03.638786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:40.331 [2024-11-18 05:09:03.638862] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:40.331 [2024-11-18 05:09:03.818585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:28:40.331 Asynchronous Event Request test
00:28:40.331 Attached to 0000:00:06.0
00:28:40.331 Reset controller to setup AER completions for this process
00:28:40.331 Registering asynchronous event callbacks...
00:28:40.331 Getting orig temperature thresholds of all controllers
00:28:40.331 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:28:40.331 Setting all controllers temperature threshold low to trigger AER
00:28:40.331 Waiting for all controllers temperature threshold to be set lower
00:28:40.331 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:28:40.331 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:28:40.331 Waiting for all controllers to trigger AER and reset threshold
00:28:40.331 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:28:40.331 Cleaning up...
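The nvme_single_aen output above shows the whole AER round trip: register callbacks, lower the temperature threshold below the current 323 K reading, wait for the controller to post the asynchronous event, then restore the threshold. A sketch of the standalone invocation, exactly as run_test expands it here (the reading of the flags is our interpretation: -T selects the temperature-threshold path, -i is the shared-memory id, and -L enables the 'log' trace flag):

# Sketch: the single-AER test as invoked above; flag semantics as described in the lead-in.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log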
00:28:40.591 00:28:40.591 real 0m0.254s 00:28:40.591 user 0m0.081s 00:28:40.591 sys 0m0.130s 00:28:40.591 05:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:40.591 05:09:03 -- common/autotest_common.sh@10 -- # set +x 00:28:40.591 ************************************ 00:28:40.591 END TEST nvme_single_aen 00:28:40.591 ************************************ 00:28:40.591 05:09:03 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:28:40.591 05:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:40.591 05:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.591 05:09:03 -- common/autotest_common.sh@10 -- # set +x 00:28:40.591 ************************************ 00:28:40.591 START TEST nvme_doorbell_aers 00:28:40.591 ************************************ 00:28:40.591 05:09:03 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:28:40.591 05:09:03 -- nvme/nvme.sh@70 -- # bdfs=() 00:28:40.591 05:09:03 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:28:40.591 05:09:03 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:28:40.591 05:09:03 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:28:40.591 05:09:03 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:40.591 05:09:03 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:40.591 05:09:03 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:40.591 05:09:03 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:40.591 05:09:03 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:40.591 05:09:03 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:40.591 05:09:03 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:28:40.591 05:09:03 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:28:40.591 05:09:03 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:28:40.850 [2024-11-18 05:09:04.228110] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93319) is not found. Dropping the request. 00:28:50.864 Executing: test_write_invalid_db 00:28:50.864 Waiting for AER completion... 00:28:50.864 Failure: test_write_invalid_db 00:28:50.864 00:28:50.864 Executing: test_invalid_db_write_overflow_sq 00:28:50.864 Waiting for AER completion... 00:28:50.864 Failure: test_invalid_db_write_overflow_sq 00:28:50.864 00:28:50.864 Executing: test_invalid_db_write_overflow_cq 00:28:50.864 Waiting for AER completion... 
00:28:50.864 Failure: test_invalid_db_write_overflow_cq 00:28:50.864 00:28:50.864 ************************************ 00:28:50.864 END TEST nvme_doorbell_aers 00:28:50.864 ************************************ 00:28:50.864 00:28:50.864 real 0m10.095s 00:28:50.864 user 0m8.647s 00:28:50.864 sys 0m1.383s 00:28:50.864 05:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:50.864 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:50.864 05:09:14 -- nvme/nvme.sh@97 -- # uname 00:28:50.864 05:09:14 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:28:50.864 05:09:14 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:28:50.864 05:09:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:28:50.864 05:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.864 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:50.864 ************************************ 00:28:50.864 START TEST nvme_multi_aen 00:28:50.864 ************************************ 00:28:50.864 05:09:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:28:50.864 [2024-11-18 05:09:14.107587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:50.864 [2024-11-18 05:09:14.107708] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.864 [2024-11-18 05:09:14.328011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:50.864 [2024-11-18 05:09:14.328085] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93319) is not found. Dropping the request. 00:28:50.865 [2024-11-18 05:09:14.328122] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93319) is not found. Dropping the request. 00:28:50.865 [2024-11-18 05:09:14.328140] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93319) is not found. Dropping the request. 00:28:50.865 [2024-11-18 05:09:14.338226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:50.865 [2024-11-18 05:09:14.338459] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.865 Child process pid: 93499 00:28:51.431 [Child] Asynchronous Event Request test 00:28:51.431 [Child] Attached to 0000:00:06.0 00:28:51.431 [Child] Registering asynchronous event callbacks... 00:28:51.431 [Child] Getting orig temperature thresholds of all controllers 00:28:51.431 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:51.431 [Child] Waiting for all controllers to trigger AER and reset threshold 00:28:51.431 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:51.431 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:51.431 [Child] Cleaning up... 00:28:51.431 Asynchronous Event Request test 00:28:51.431 Attached to 0000:00:06.0 00:28:51.431 Reset controller to setup AER completions for this process 00:28:51.431 Registering asynchronous event callbacks... 
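The nvme_doorbell_aers helper traced above builds its device list by asking gen_nvme.sh for the controller configuration and extracting each traddr with jq. The same enumeration works standalone; this sketch assumes jq is installed and the repo lives at the workspace path used throughout this log.

# Sketch: enumerate NVMe PCI addresses the way nvme_doorbell_aers does above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
  printf '%s\n' "$bdf"    # prints 0000:00:06.0 on this VM
done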
00:28:51.432 Getting orig temperature thresholds of all controllers 00:28:51.432 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:51.432 Setting all controllers temperature threshold low to trigger AER 00:28:51.432 Waiting for all controllers temperature threshold to be set lower 00:28:51.432 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:51.432 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:28:51.432 Waiting for all controllers to trigger AER and reset threshold 00:28:51.432 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:51.432 Cleaning up... 00:28:51.432 ************************************ 00:28:51.432 END TEST nvme_multi_aen 00:28:51.432 ************************************ 00:28:51.432 00:28:51.432 real 0m0.631s 00:28:51.432 user 0m0.230s 00:28:51.432 sys 0m0.292s 00:28:51.432 05:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:51.432 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:51.432 05:09:14 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:51.432 05:09:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:51.432 05:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:51.432 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:51.432 ************************************ 00:28:51.432 START TEST nvme_startup 00:28:51.432 ************************************ 00:28:51.432 05:09:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:51.690 Initializing NVMe Controllers 00:28:51.690 Attached to 0000:00:06.0 00:28:51.690 Initialization complete. 00:28:51.690 Time used:227383.266 (us). 00:28:51.690 ************************************ 00:28:51.690 END TEST nvme_startup 00:28:51.690 ************************************ 00:28:51.690 00:28:51.690 real 0m0.301s 00:28:51.690 user 0m0.091s 00:28:51.690 sys 0m0.162s 00:28:51.690 05:09:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:51.690 05:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 05:09:15 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:28:51.691 05:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:51.691 05:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:51.691 05:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 ************************************ 00:28:51.691 START TEST nvme_multi_secondary 00:28:51.691 ************************************ 00:28:51.691 05:09:15 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:28:51.691 05:09:15 -- nvme/nvme.sh@52 -- # pid0=93555 00:28:51.691 05:09:15 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:28:51.691 05:09:15 -- nvme/nvme.sh@54 -- # pid1=93556 00:28:51.691 05:09:15 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:28:51.691 05:09:15 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:54.975 Initializing NVMe Controllers 00:28:54.976 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:54.976 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:54.976 Initialization complete. Launching workers. 
00:28:54.976 ========================================================
00:28:54.976                                                  Latency(us)
00:28:54.976 Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:54.976 PCIE (0000:00:06.0) NSID 1 from core 1:   36506.65     142.60     437.90     122.72    1714.31
00:28:54.976 ========================================================
00:28:54.976 Total                                  :   36506.65     142.60     437.90     122.72    1714.31
00:28:54.976
00:28:55.234 Initializing NVMe Controllers
00:28:55.234 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:28:55.234 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:28:55.234 Initialization complete. Launching workers.
00:28:55.234 ========================================================
00:28:55.234                                                  Latency(us)
00:28:55.234 Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:55.234 PCIE (0000:00:06.0) NSID 1 from core 2:   15721.21      61.41    1016.79     152.75    7684.41
00:28:55.234 ========================================================
00:28:55.234 Total                                  :   15721.21      61.41    1016.79     152.75    7684.41
00:28:55.234
00:28:55.234  05:09:18 -- nvme/nvme.sh@56 -- # wait 93555
00:28:57.766 Initializing NVMe Controllers
00:28:57.766 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:28:57.766 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:28:57.766 Initialization complete. Launching workers.
00:28:57.766 ========================================================
00:28:57.766                                                  Latency(us)
00:28:57.766 Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:57.766 PCIE (0000:00:06.0) NSID 1 from core 0:   45844.16     179.08     348.67     126.69    1586.84
00:28:57.766 ========================================================
00:28:57.766 Total                                  :   45844.16     179.08     348.67     126.69    1586.84
00:28:57.766
00:28:57.766  05:09:20 -- nvme/nvme.sh@57 -- # wait 93556
00:28:57.766  05:09:20 -- nvme/nvme.sh@61 -- # pid0=93626
00:28:57.766  05:09:20 -- nvme/nvme.sh@63 -- # pid1=93627
00:28:57.766  05:09:20 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:28:57.766  05:09:20 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:28:57.766  05:09:20 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:29:01.052 Initializing NVMe Controllers
00:29:01.052 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:01.052 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:29:01.052 Initialization complete. Launching workers.
00:29:01.052 ========================================================
00:29:01.052                                                  Latency(us)
00:29:01.052 Device Information                     :       IOPS      MiB/s    Average        min        max
00:29:01.052 PCIE (0000:00:06.0) NSID 1 from core 0:   35732.60     139.58     447.41     140.98    1274.33
00:29:01.052 ========================================================
00:29:01.052 Total                                  :   35732.60     139.58     447.41     140.98    1274.33
00:29:01.052
00:29:01.052 Initializing NVMe Controllers
00:29:01.052 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:01.052 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:29:01.052 Initialization complete. Launching workers.
00:29:01.052 ========================================================
00:29:01.052                                                  Latency(us)
00:29:01.052 Device Information                     :       IOPS      MiB/s    Average        min        max
00:29:01.052 PCIE (0000:00:06.0) NSID 1 from core 1:   35134.99     137.25     454.99      97.92    1328.04
00:29:01.052 ========================================================
00:29:01.052 Total                                  :   35134.99     137.25     454.99      97.92    1328.04
00:29:01.052
00:29:02.956 Initializing NVMe Controllers
00:29:02.956 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:02.956 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:29:02.956 Initialization complete. Launching workers.
00:29:02.956 ========================================================
00:29:02.956                                                  Latency(us)
00:29:02.956 Device Information                     :       IOPS      MiB/s    Average        min        max
00:29:02.956 PCIE (0000:00:06.0) NSID 1 from core 2:   18267.84      71.36     875.05     159.35    8818.16
00:29:02.956 ========================================================
00:29:02.956 Total                                  :   18267.84      71.36     875.05     159.35    8818.16
00:29:02.956
00:29:02.956 ************************************
00:29:02.956 END TEST nvme_multi_secondary
00:29:02.956 ************************************
00:29:02.956  05:09:26 -- nvme/nvme.sh@65 -- # wait 93626
00:29:02.956  05:09:26 -- nvme/nvme.sh@66 -- # wait 93627
00:29:02.956
00:29:02.956 real	0m10.940s
00:29:02.956 user	0m18.654s
00:29:02.956 sys	0m0.964s
00:29:02.956  05:09:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:02.956  05:09:26 -- common/autotest_common.sh@10 -- # set +x
00:29:02.956  05:09:26 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:29:02.956  05:09:26 -- nvme/nvme.sh@102 -- # kill_stub
00:29:02.956  05:09:26 -- common/autotest_common.sh@1075 -- # [[ -e /proc/92952 ]]
00:29:02.956  05:09:26 -- common/autotest_common.sh@1076 -- # kill 92952
00:29:02.956  05:09:26 -- common/autotest_common.sh@1077 -- # wait 92952
00:29:03.525 [2024-11-18 05:09:26.809632] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93498) is not found. Dropping the request.
00:29:03.525 [2024-11-18 05:09:26.809749] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93498) is not found. Dropping the request.
00:29:03.525 [2024-11-18 05:09:26.809811] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93498) is not found. Dropping the request.
00:29:03.525 [2024-11-18 05:09:26.809853] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93498) is not found. Dropping the request.
00:29:03.525  05:09:27 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:29:03.525  05:09:27 -- common/autotest_common.sh@1083 -- # echo 2
00:29:03.525  05:09:27 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:29:03.786  05:09:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:03.786  05:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:03.786  05:09:27 -- common/autotest_common.sh@10 -- # set +x
00:29:03.786 ************************************
00:29:03.786 START TEST bdev_nvme_reset_stuck_adm_cmd
00:29:03.786 ************************************
00:29:03.786  05:09:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:29:03.786 * Looking for test storage...
00:29:03.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:03.786 05:09:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:03.786 05:09:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:03.786 05:09:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:03.786 05:09:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:03.786 05:09:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:03.786 05:09:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:03.786 05:09:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:03.786 05:09:27 -- scripts/common.sh@335 -- # IFS=.-: 00:29:03.786 05:09:27 -- scripts/common.sh@335 -- # read -ra ver1 00:29:03.786 05:09:27 -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.786 05:09:27 -- scripts/common.sh@336 -- # read -ra ver2 00:29:03.786 05:09:27 -- scripts/common.sh@337 -- # local 'op=<' 00:29:03.786 05:09:27 -- scripts/common.sh@339 -- # ver1_l=2 00:29:03.786 05:09:27 -- scripts/common.sh@340 -- # ver2_l=1 00:29:03.786 05:09:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:03.786 05:09:27 -- scripts/common.sh@343 -- # case "$op" in 00:29:03.786 05:09:27 -- scripts/common.sh@344 -- # : 1 00:29:03.786 05:09:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:03.786 05:09:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:03.786 05:09:27 -- scripts/common.sh@364 -- # decimal 1 00:29:03.786 05:09:27 -- scripts/common.sh@352 -- # local d=1 00:29:03.786 05:09:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.786 05:09:27 -- scripts/common.sh@354 -- # echo 1 00:29:03.786 05:09:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:03.786 05:09:27 -- scripts/common.sh@365 -- # decimal 2 00:29:03.786 05:09:27 -- scripts/common.sh@352 -- # local d=2 00:29:03.786 05:09:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.786 05:09:27 -- scripts/common.sh@354 -- # echo 2 00:29:03.786 05:09:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:03.786 05:09:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:03.786 05:09:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:03.786 05:09:27 -- scripts/common.sh@367 -- # return 0 00:29:03.786 05:09:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.786 05:09:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.786 --rc genhtml_branch_coverage=1 00:29:03.786 --rc genhtml_function_coverage=1 00:29:03.786 --rc genhtml_legend=1 00:29:03.786 --rc geninfo_all_blocks=1 00:29:03.786 --rc geninfo_unexecuted_blocks=1 00:29:03.786 00:29:03.786 ' 00:29:03.786 05:09:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.786 --rc genhtml_branch_coverage=1 00:29:03.786 --rc genhtml_function_coverage=1 00:29:03.786 --rc genhtml_legend=1 00:29:03.786 --rc geninfo_all_blocks=1 00:29:03.786 --rc geninfo_unexecuted_blocks=1 00:29:03.786 00:29:03.786 ' 00:29:03.786 05:09:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.786 --rc genhtml_branch_coverage=1 00:29:03.786 --rc genhtml_function_coverage=1 00:29:03.786 --rc genhtml_legend=1 00:29:03.786 --rc geninfo_all_blocks=1 00:29:03.786 --rc geninfo_unexecuted_blocks=1 00:29:03.786 00:29:03.786 ' 00:29:03.786 05:09:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.786 --rc genhtml_branch_coverage=1 00:29:03.786 --rc genhtml_function_coverage=1 00:29:03.786 --rc genhtml_legend=1 00:29:03.786 --rc geninfo_all_blocks=1 00:29:03.786 --rc geninfo_unexecuted_blocks=1 00:29:03.786 00:29:03.786 ' 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:03.786 05:09:27 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:03.786 05:09:27 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:03.786 05:09:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:03.786 05:09:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:03.786 05:09:27 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:03.786 05:09:27 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:03.786 05:09:27 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:03.786 05:09:27 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:03.786 05:09:27 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:03.786 05:09:27 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:29:03.786 05:09:27 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:03.786 05:09:27 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=93776 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:03.786 05:09:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 93776 00:29:03.787 05:09:27 -- common/autotest_common.sh@829 -- # '[' -z 93776 ']' 00:29:03.787 05:09:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.787 05:09:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:03.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.787 05:09:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.787 05:09:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:03.787 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:29:04.046 [2024-11-18 05:09:27.337123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
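The lcov gate traced above boils down to a field-wise version comparison: scripts/common.sh splits both version strings on ".", "-" and ":" and compares the fields numerically, with missing fields treated as zero. A minimal sketch of that logic, reconstructed from the xtrace (helper names mirror the trace; this is illustrative, not the verbatim scripts/common.sh source):

    # cmp_versions VER1 OP VER2 -- e.g. cmp_versions 1.15 '<' 2
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }

In this run lt 1.15 2 succeeds, which is why the branch- and function-coverage flags are appended to LCOV_OPTS before the test proper starts.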
00:29:04.046 [2024-11-18 05:09:27.337767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93776 ] 00:29:04.046 [2024-11-18 05:09:27.513396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.305 [2024-11-18 05:09:27.754224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:04.305 [2024-11-18 05:09:27.754667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.305 [2024-11-18 05:09:27.755504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.305 [2024-11-18 05:09:27.755584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.305 [2024-11-18 05:09:27.755610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.683 05:09:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.683 05:09:29 -- common/autotest_common.sh@862 -- # return 0 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:29:05.683 05:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.683 05:09:29 -- common/autotest_common.sh@10 -- # set +x 00:29:05.683 nvme0n1 00:29:05.683 05:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_r4D1F.txt 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:05.683 05:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.683 05:09:29 -- common/autotest_common.sh@10 -- # set +x 00:29:05.683 true 00:29:05.683 05:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731906569 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=93812 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:05.683 05:09:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:08.214 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:08.214 05:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.214 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:08.214 [2024-11-18 05:09:31.150722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:08.214 [2024-11-18 05:09:31.151491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:08.214 [2024-11-18 05:09:31.151584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:08.214 [2024-11-18 05:09:31.151624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.214 [2024-11-18 05:09:31.153626] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:08.214 05:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.214 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 93812 00:29:08.214 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 93812 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 93812 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.215 05:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.215 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:08.215 05:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_r4D1F.txt 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_r4D1F.txt 00:29:08.215 05:09:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 93776 00:29:08.215 05:09:31 -- common/autotest_common.sh@936 -- # '[' -z 93776 ']' 00:29:08.215 05:09:31 -- common/autotest_common.sh@940 -- # kill -0 93776 00:29:08.215 05:09:31 -- common/autotest_common.sh@941 -- # uname 00:29:08.215 05:09:31 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.215 05:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93776 00:29:08.215 05:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:08.215 05:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:08.215 killing process with pid 93776 00:29:08.215 05:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93776' 00:29:08.215 05:09:31 -- common/autotest_common.sh@955 -- # kill 93776 00:29:08.215 05:09:31 -- common/autotest_common.sh@960 -- # wait 93776 00:29:10.119 05:09:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:10.119 05:09:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:10.119 00:29:10.119 real 0m6.081s 00:29:10.119 user 0m21.468s 00:29:10.119 sys 0m0.654s 00:29:10.119 05:09:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.119 05:09:33 -- common/autotest_common.sh@10 -- # set +x 00:29:10.119 ************************************ 00:29:10.119 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:10.119 ************************************ 00:29:10.119 05:09:33 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:10.119 05:09:33 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:10.119 05:09:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.119 05:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.119 05:09:33 -- common/autotest_common.sh@10 -- # set +x 00:29:10.119 ************************************ 00:29:10.119 START TEST nvme_fio 00:29:10.119 ************************************ 00:29:10.119 05:09:33 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:29:10.119 05:09:33 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:10.119 05:09:33 -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:10.119 05:09:33 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:10.119 05:09:33 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:10.119 05:09:33 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:10.119 05:09:33 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:10.119 05:09:33 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:10.119 05:09:33 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:10.119 05:09:33 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:29:10.119 05:09:33 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:10.119 05:09:33 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:29:10.119 05:09:33 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:10.119 05:09:33 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:10.119 05:09:33 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:10.119 05:09:33 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:10.119 05:09:33 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:10.119 05:09:33 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:10.378 05:09:33 -- nvme/nvme.sh@41 -- # bs=4096 00:29:10.378 05:09:33 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:10.378 05:09:33 -- 
common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:10.378 05:09:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:10.378 05:09:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.378 05:09:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:10.378 05:09:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:10.378 05:09:33 -- common/autotest_common.sh@1330 -- # shift 00:29:10.378 05:09:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:10.378 05:09:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.378 05:09:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:10.378 05:09:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:10.378 05:09:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:10.378 05:09:33 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:29:10.378 05:09:33 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:29:10.378 05:09:33 -- common/autotest_common.sh@1336 -- # break 00:29:10.378 05:09:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:10.378 05:09:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:10.378 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:10.378 fio-3.35 00:29:10.378 Starting 1 thread 00:29:13.667 00:29:13.667 test: (groupid=0, jobs=1): err= 0: pid=93947: Mon Nov 18 05:09:36 2024 00:29:13.667 read: IOPS=13.0k, BW=50.8MiB/s (53.2MB/s)(102MiB/2001msec) 00:29:13.667 slat (nsec): min=4021, max=77442, avg=6827.44, stdev=4239.07 00:29:13.667 clat (usec): min=383, max=10236, avg=4902.46, stdev=605.24 00:29:13.667 lat (usec): min=389, max=10313, avg=4909.28, stdev=606.07 00:29:13.667 clat percentiles (usec): 00:29:13.667 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4293], 20.00th=[ 4490], 00:29:13.667 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:29:13.667 | 70.00th=[ 5080], 80.00th=[ 5407], 90.00th=[ 5669], 95.00th=[ 5800], 00:29:13.667 | 99.00th=[ 7046], 99.50th=[ 8029], 99.90th=[ 9241], 99.95th=[ 9372], 00:29:13.667 | 99.99th=[10159] 00:29:13.667 bw ( KiB/s): min=49224, max=54776, per=99.81%, avg=51882.67, stdev=2783.43, samples=3 00:29:13.667 iops : min=12306, max=13694, avg=12970.67, stdev=695.86, samples=3 00:29:13.667 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(101MiB/2001msec); 0 zone resets 00:29:13.667 slat (nsec): min=4161, max=59891, avg=6967.28, stdev=4200.64 00:29:13.667 clat (usec): min=289, max=10073, avg=4916.08, stdev=615.98 00:29:13.667 lat (usec): min=296, max=10096, avg=4923.04, stdev=616.75 00:29:13.667 clat percentiles (usec): 00:29:13.667 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:29:13.667 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:29:13.667 | 70.00th=[ 5080], 80.00th=[ 5407], 90.00th=[ 5669], 95.00th=[ 5800], 00:29:13.667 | 99.00th=[ 7308], 99.50th=[ 8094], 99.90th=[ 9372], 99.95th=[ 9503], 
00:29:13.667 | 99.99th=[ 9896] 00:29:13.667 bw ( KiB/s): min=49256, max=55120, per=100.00%, avg=51922.67, stdev=2967.80, samples=3 00:29:13.667 iops : min=12314, max=13780, avg=12980.67, stdev=741.95, samples=3 00:29:13.667 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:13.667 lat (msec) : 2=0.04%, 4=1.11%, 10=98.81%, 20=0.01% 00:29:13.667 cpu : usr=99.90%, sys=0.05%, ctx=5, majf=0, minf=609 00:29:13.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:13.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:13.667 issued rwts: total=26004,25975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:13.667 00:29:13.667 Run status group 0 (all jobs): 00:29:13.667 READ: bw=50.8MiB/s (53.2MB/s), 50.8MiB/s-50.8MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2001-2001msec 00:29:13.667 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=101MiB (106MB), run=2001-2001msec 00:29:13.667 ----------------------------------------------------- 00:29:13.667 Suppressions used: 00:29:13.667 count bytes template 00:29:13.667 1 32 /usr/src/fio/parse.c 00:29:13.667 ----------------------------------------------------- 00:29:13.667 00:29:13.667 05:09:37 -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:13.667 05:09:37 -- nvme/nvme.sh@46 -- # true 00:29:13.667 00:29:13.667 real 0m3.822s 00:29:13.667 user 0m3.077s 00:29:13.667 sys 0m0.385s 00:29:13.667 05:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.667 ************************************ 00:29:13.667 END TEST nvme_fio 00:29:13.667 ************************************ 00:29:13.667 05:09:37 -- common/autotest_common.sh@10 -- # set +x 00:29:13.667 00:29:13.667 real 0m46.042s 00:29:13.667 user 2m5.183s 00:29:13.667 sys 0m8.089s 00:29:13.667 05:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.667 ************************************ 00:29:13.667 END TEST nvme 00:29:13.667 ************************************ 00:29:13.667 05:09:37 -- common/autotest_common.sh@10 -- # set +x 00:29:13.667 05:09:37 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:29:13.667 05:09:37 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:13.667 05:09:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.667 05:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.667 05:09:37 -- common/autotest_common.sh@10 -- # set +x 00:29:13.667 ************************************ 00:29:13.667 START TEST nvme_scc 00:29:13.667 ************************************ 00:29:13.667 05:09:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:13.667 * Looking for test storage... 
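As a cross-check, the fio summary above is internally consistent: at bs=4096, 50.8 MiB/s corresponds to 50.8 * 1024 / 4 ≈ 13005 4KiB operations per second, matching IOPS=13.0k, and the 26004 issued reads * 4 KiB ≈ 101.6 MiB matches io=102MiB over the roughly 2-second run.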
00:29:13.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:13.667 05:09:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:13.667 05:09:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:13.667 05:09:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:13.927 05:09:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:13.927 05:09:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:13.927 05:09:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:13.927 05:09:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:13.927 05:09:37 -- scripts/common.sh@335 -- # IFS=.-: 00:29:13.927 05:09:37 -- scripts/common.sh@335 -- # read -ra ver1 00:29:13.927 05:09:37 -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.927 05:09:37 -- scripts/common.sh@336 -- # read -ra ver2 00:29:13.927 05:09:37 -- scripts/common.sh@337 -- # local 'op=<' 00:29:13.927 05:09:37 -- scripts/common.sh@339 -- # ver1_l=2 00:29:13.927 05:09:37 -- scripts/common.sh@340 -- # ver2_l=1 00:29:13.927 05:09:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:13.927 05:09:37 -- scripts/common.sh@343 -- # case "$op" in 00:29:13.927 05:09:37 -- scripts/common.sh@344 -- # : 1 00:29:13.927 05:09:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:13.927 05:09:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:13.927 05:09:37 -- scripts/common.sh@364 -- # decimal 1 00:29:13.927 05:09:37 -- scripts/common.sh@352 -- # local d=1 00:29:13.927 05:09:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.927 05:09:37 -- scripts/common.sh@354 -- # echo 1 00:29:13.927 05:09:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:13.927 05:09:37 -- scripts/common.sh@365 -- # decimal 2 00:29:13.927 05:09:37 -- scripts/common.sh@352 -- # local d=2 00:29:13.927 05:09:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.927 05:09:37 -- scripts/common.sh@354 -- # echo 2 00:29:13.927 05:09:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:13.927 05:09:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:13.927 05:09:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:13.927 05:09:37 -- scripts/common.sh@367 -- # return 0 00:29:13.927 05:09:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.927 05:09:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.927 --rc genhtml_branch_coverage=1 00:29:13.927 --rc genhtml_function_coverage=1 00:29:13.927 --rc genhtml_legend=1 00:29:13.927 --rc geninfo_all_blocks=1 00:29:13.927 --rc geninfo_unexecuted_blocks=1 00:29:13.927 00:29:13.927 ' 00:29:13.927 05:09:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.927 --rc genhtml_branch_coverage=1 00:29:13.927 --rc genhtml_function_coverage=1 00:29:13.927 --rc genhtml_legend=1 00:29:13.927 --rc geninfo_all_blocks=1 00:29:13.927 --rc geninfo_unexecuted_blocks=1 00:29:13.927 00:29:13.927 ' 00:29:13.927 05:09:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.927 --rc genhtml_branch_coverage=1 00:29:13.927 --rc genhtml_function_coverage=1 00:29:13.927 --rc genhtml_legend=1 00:29:13.927 --rc geninfo_all_blocks=1 00:29:13.927 --rc geninfo_unexecuted_blocks=1 00:29:13.927 00:29:13.927 ' 00:29:13.927 05:09:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.927 --rc genhtml_branch_coverage=1 00:29:13.927 --rc genhtml_function_coverage=1 00:29:13.927 --rc genhtml_legend=1 00:29:13.927 --rc geninfo_all_blocks=1 00:29:13.927 --rc geninfo_unexecuted_blocks=1 00:29:13.927 00:29:13.927 ' 00:29:13.927 05:09:37 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:13.927 05:09:37 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:13.927 05:09:37 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:13.927 05:09:37 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:13.927 05:09:37 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:13.927 05:09:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.927 05:09:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.927 05:09:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.927 05:09:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.927 05:09:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.927 05:09:37 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.927 05:09:37 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.927 05:09:37 -- paths/export.sh@6 -- # export PATH 00:29:13.927 05:09:37 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.927 05:09:37 -- nvme/functions.sh@10 -- # ctrls=() 00:29:13.927 05:09:37 -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:13.927 05:09:37 -- nvme/functions.sh@11 -- # nvmes=() 00:29:13.927 05:09:37 -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:13.927 05:09:37 -- nvme/functions.sh@12 -- # bdfs=() 00:29:13.927 05:09:37 -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:13.927 05:09:37 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:13.927 05:09:37 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:13.927 05:09:37 -- nvme/functions.sh@14 -- # nvme_name= 00:29:13.927 05:09:37 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:13.927 05:09:37 -- nvme/nvme_scc.sh@12 -- # uname 00:29:13.927 05:09:37 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:29:13.927 05:09:37 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:29:13.927 05:09:37 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:14.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:14.186 Waiting for block devices as requested 00:29:14.186 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:14.448 05:09:37 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:14.448 05:09:37 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:14.448 05:09:37 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:14.448 05:09:37 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:29:14.448 05:09:37 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:29:14.448 05:09:37 -- scripts/common.sh@15 -- # local i 00:29:14.448 05:09:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:14.448 05:09:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:14.448 05:09:37 -- scripts/common.sh@24 -- # return 0 00:29:14.448 05:09:37 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:14.448 05:09:37 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:14.448 05:09:37 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@18 -- # shift 00:29:14.448 05:09:37 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read 
-r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:14.448 05:09:37 -- 
nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.448 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:14.448 05:09:37 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.448 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- 
# read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 
00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # 
nvme0[fwug]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.449 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:14.449 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.449 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 
'nvme0[anacap]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 
-- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.450 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.450 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.450 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- 
nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:14.451 05:09:37 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:14.451 05:09:37 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:14.451 05:09:37 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:29:14.451 05:09:37 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@18 -- # shift 00:29:14.451 05:09:37 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- 
# [[ -n 0x4 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 
-- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.451 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:29:14.451 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:29:14.451 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # 
nvme0n1[mssrl]=128 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 
00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:14.452 05:09:37 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # IFS=: 00:29:14.452 05:09:37 -- nvme/functions.sh@21 -- # read -r reg val 00:29:14.452 05:09:37 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:14.452 05:09:37 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:14.452 05:09:37 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:29:14.452 05:09:37 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:29:14.452 05:09:37 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:29:14.452 05:09:37 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:29:14.452 05:09:37 -- nvme/functions.sh@204 -- # 
_ctrls=($(get_ctrls_with_feature "$feature")) 00:29:14.452 05:09:37 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:29:14.452 05:09:37 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:29:14.452 05:09:37 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:29:14.452 05:09:37 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:29:14.452 05:09:37 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:14.452 05:09:37 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:29:14.452 05:09:37 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:29:14.452 05:09:37 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:29:14.452 05:09:37 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:14.452 05:09:37 -- nvme/functions.sh@76 -- # echo 0x15d 00:29:14.452 05:09:37 -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:14.452 05:09:37 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:14.452 05:09:37 -- nvme/functions.sh@197 -- # echo nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:29:14.452 05:09:37 -- nvme/functions.sh@206 -- # echo nvme0 00:29:14.452 05:09:37 -- nvme/functions.sh@207 -- # return 0 00:29:14.452 05:09:37 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:29:14.452 05:09:37 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:29:14.452 05:09:37 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:15.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:15.020 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:15.588 05:09:38 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:15.588 05:09:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:29:15.588 05:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:15.588 05:09:38 -- common/autotest_common.sh@10 -- # set +x 00:29:15.588 ************************************ 00:29:15.588 START TEST nvme_simple_copy 00:29:15.588 ************************************ 00:29:15.588 05:09:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:15.851 Initializing NVMe Controllers 00:29:15.851 Attaching to 0000:00:06.0 00:29:15.851 Controller supports SCC. Attached to 0000:00:06.0 00:29:15.851 Namespace ID: 1 size: 5GB 00:29:15.851 Initialization complete. 
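What the long trace above amounts to: nvme/functions.sh runs `nvme id-ctrl` / `nvme id-ns`, folds each `field : value` line into a bash associative array (`nvme0[...]`, `nvme0n1[...]`), then gates this test on ONCS bit 8, the Simple Copy Command capability (the dumped oncs=0x15d has that bit set). A minimal standalone sketch of the same pattern, assuming a /dev/nvme0 device and a working nvme-cli; this is not the project's actual helper:

```bash
#!/usr/bin/env bash
# Fold "field : value" output from nvme-cli into an associative array,
# the same shape nvme_get builds in the trace above.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}          # field names are space-padded
    val=${val# }                      # values carry one leading space
    [[ -n $reg ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)     # device path is an assumption

# ONCS bit 8 advertises Simple Copy; this mirrors the ctrl_has_scc
# check that selected nvme0 for the simple_copy test.
if (( ctrl[oncs] & (1 << 8) )); then
    echo "controller supports SCC"
fi
```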
00:29:15.851 00:29:15.851 Controller QEMU NVMe Ctrl (12340 ) 00:29:15.851 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:15.851 Namespace Block Size:4096 00:29:15.851 Writing LBAs 0 to 63 with Random Data 00:29:15.851 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:15.851 LBAs matching Written Data: 64 00:29:15.851 ************************************ 00:29:15.851 END TEST nvme_simple_copy 00:29:15.851 ************************************ 00:29:15.851 00:29:15.851 real 0m0.308s 00:29:15.851 user 0m0.124s 00:29:15.851 sys 0m0.083s 00:29:15.851 05:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:15.851 05:09:39 -- common/autotest_common.sh@10 -- # set +x 00:29:15.851 ************************************ 00:29:15.851 END TEST nvme_scc 00:29:15.851 ************************************ 00:29:15.851 00:29:15.852 real 0m2.250s 00:29:15.852 user 0m0.695s 00:29:15.852 sys 0m1.495s 00:29:15.852 05:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:15.852 05:09:39 -- common/autotest_common.sh@10 -- # set +x 00:29:16.110 05:09:39 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:29:16.110 05:09:39 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:29:16.110 05:09:39 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:29:16.110 05:09:39 -- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]] 00:29:16.110 05:09:39 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:29:16.110 05:09:39 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:16.110 05:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:16.110 05:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:16.110 05:09:39 -- common/autotest_common.sh@10 -- # set +x 00:29:16.110 ************************************ 00:29:16.110 START TEST nvme_rpc 00:29:16.110 ************************************ 00:29:16.110 05:09:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:16.110 * Looking for test storage... 00:29:16.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:16.110 05:09:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:16.110 05:09:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:16.110 05:09:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:16.110 05:09:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:16.110 05:09:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:16.110 05:09:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:16.110 05:09:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:16.110 05:09:39 -- scripts/common.sh@335 -- # IFS=.-: 00:29:16.110 05:09:39 -- scripts/common.sh@335 -- # read -ra ver1 00:29:16.110 05:09:39 -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.110 05:09:39 -- scripts/common.sh@336 -- # read -ra ver2 00:29:16.110 05:09:39 -- scripts/common.sh@337 -- # local 'op=<' 00:29:16.110 05:09:39 -- scripts/common.sh@339 -- # ver1_l=2 00:29:16.110 05:09:39 -- scripts/common.sh@340 -- # ver2_l=1 00:29:16.110 05:09:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:16.110 05:09:39 -- scripts/common.sh@343 -- # case "$op" in 00:29:16.110 05:09:39 -- scripts/common.sh@344 -- # : 1 00:29:16.110 05:09:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:16.110 05:09:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.110 05:09:39 -- scripts/common.sh@364 -- # decimal 1 00:29:16.110 05:09:39 -- scripts/common.sh@352 -- # local d=1 00:29:16.110 05:09:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.110 05:09:39 -- scripts/common.sh@354 -- # echo 1 00:29:16.110 05:09:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:16.110 05:09:39 -- scripts/common.sh@365 -- # decimal 2 00:29:16.110 05:09:39 -- scripts/common.sh@352 -- # local d=2 00:29:16.110 05:09:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.110 05:09:39 -- scripts/common.sh@354 -- # echo 2 00:29:16.110 05:09:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:16.110 05:09:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:16.110 05:09:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:16.110 05:09:39 -- scripts/common.sh@367 -- # return 0 00:29:16.110 05:09:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.110 05:09:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.110 --rc genhtml_branch_coverage=1 00:29:16.110 --rc genhtml_function_coverage=1 00:29:16.110 --rc genhtml_legend=1 00:29:16.110 --rc geninfo_all_blocks=1 00:29:16.110 --rc geninfo_unexecuted_blocks=1 00:29:16.110 00:29:16.110 ' 00:29:16.110 05:09:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.110 --rc genhtml_branch_coverage=1 00:29:16.110 --rc genhtml_function_coverage=1 00:29:16.110 --rc genhtml_legend=1 00:29:16.110 --rc geninfo_all_blocks=1 00:29:16.110 --rc geninfo_unexecuted_blocks=1 00:29:16.110 00:29:16.110 ' 00:29:16.110 05:09:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.110 --rc genhtml_branch_coverage=1 00:29:16.110 --rc genhtml_function_coverage=1 00:29:16.110 --rc genhtml_legend=1 00:29:16.110 --rc geninfo_all_blocks=1 00:29:16.110 --rc geninfo_unexecuted_blocks=1 00:29:16.110 00:29:16.110 ' 00:29:16.110 05:09:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.110 --rc genhtml_branch_coverage=1 00:29:16.110 --rc genhtml_function_coverage=1 00:29:16.110 --rc genhtml_legend=1 00:29:16.110 --rc geninfo_all_blocks=1 00:29:16.110 --rc geninfo_unexecuted_blocks=1 00:29:16.110 00:29:16.110 ' 00:29:16.110 05:09:39 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:16.110 05:09:39 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:16.111 05:09:39 -- common/autotest_common.sh@1519 -- # bdfs=() 00:29:16.111 05:09:39 -- common/autotest_common.sh@1519 -- # local bdfs 00:29:16.111 05:09:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:16.111 05:09:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:16.111 05:09:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:16.111 05:09:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:16.111 05:09:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:16.111 05:09:39 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:16.111 05:09:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:16.111 05:09:39 -- common/autotest_common.sh@1510 -- # (( 
1 == 0 )) 00:29:16.111 05:09:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:16.111 05:09:39 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:29:16.111 05:09:39 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:29:16.111 05:09:39 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=94391 00:29:16.111 05:09:39 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:16.111 05:09:39 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:16.111 05:09:39 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 94391 00:29:16.111 05:09:39 -- common/autotest_common.sh@829 -- # '[' -z 94391 ']' 00:29:16.111 05:09:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.111 05:09:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:16.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.111 05:09:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.111 05:09:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:16.111 05:09:39 -- common/autotest_common.sh@10 -- # set +x 00:29:16.370 [2024-11-18 05:09:39.694658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:16.370 [2024-11-18 05:09:39.694838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94391 ] 00:29:16.370 [2024-11-18 05:09:39.868864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:16.629 [2024-11-18 05:09:40.068163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:16.629 [2024-11-18 05:09:40.068535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.629 [2024-11-18 05:09:40.068563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.004 05:09:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.004 05:09:41 -- common/autotest_common.sh@862 -- # return 0 00:29:18.004 05:09:41 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:18.262 Nvme0n1 00:29:18.262 05:09:41 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:18.263 05:09:41 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:18.521 request: 00:29:18.521 { 00:29:18.521 "filename": "non_existing_file", 00:29:18.521 "bdev_name": "Nvme0n1", 00:29:18.521 "method": "bdev_nvme_apply_firmware", 00:29:18.521 "req_id": 1 00:29:18.521 } 00:29:18.521 Got JSON-RPC error response 00:29:18.521 response: 00:29:18.521 { 00:29:18.521 "code": -32603, 00:29:18.521 "message": "open file failed." 
00:29:18.521 } 00:29:18.521 05:09:41 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:18.521 05:09:41 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:18.521 05:09:41 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:18.521 05:09:42 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:18.521 05:09:42 -- nvme/nvme_rpc.sh@40 -- # killprocess 94391 00:29:18.521 05:09:42 -- common/autotest_common.sh@936 -- # '[' -z 94391 ']' 00:29:18.521 05:09:42 -- common/autotest_common.sh@940 -- # kill -0 94391 00:29:18.521 05:09:42 -- common/autotest_common.sh@941 -- # uname 00:29:18.521 05:09:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:18.521 05:09:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94391 00:29:18.780 05:09:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:18.780 05:09:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:18.780 05:09:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94391' 00:29:18.780 killing process with pid 94391 00:29:18.780 05:09:42 -- common/autotest_common.sh@955 -- # kill 94391 00:29:18.780 05:09:42 -- common/autotest_common.sh@960 -- # wait 94391 00:29:20.684 00:29:20.684 real 0m4.304s 00:29:20.684 user 0m8.160s 00:29:20.684 sys 0m0.679s 00:29:20.684 05:09:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:20.684 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:29:20.684 ************************************ 00:29:20.684 END TEST nvme_rpc 00:29:20.684 ************************************ 00:29:20.684 05:09:43 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:20.684 05:09:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:20.684 05:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:20.684 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:29:20.684 ************************************ 00:29:20.684 START TEST nvme_rpc_timeouts 00:29:20.684 ************************************ 00:29:20.684 05:09:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:20.684 * Looking for test storage... 
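The bdev_nvme_apply_firmware exchange just above is a deliberate failure: nvme_rpc.sh hands the RPC a file that does not exist and treats the -32603 "open file failed." response as the pass condition. Boiled down, the assertion looks roughly like this; it assumes a running target with Nvme0n1 already attached, as in the run above:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# rpc.py exits non-zero when the target returns a JSON-RPC error, so a
# zero status here would mean a nonexistent firmware image was accepted.
if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "ERROR: apply_firmware unexpectedly succeeded" >&2
    exit 1
fi
echo "got the expected open-file failure"
```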
00:29:20.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:20.684 05:09:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:20.684 05:09:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:20.684 05:09:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:20.684 05:09:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:20.684 05:09:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:20.684 05:09:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:20.684 05:09:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:20.684 05:09:43 -- scripts/common.sh@335 -- # IFS=.-: 00:29:20.684 05:09:43 -- scripts/common.sh@335 -- # read -ra ver1 00:29:20.684 05:09:43 -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.684 05:09:43 -- scripts/common.sh@336 -- # read -ra ver2 00:29:20.684 05:09:43 -- scripts/common.sh@337 -- # local 'op=<' 00:29:20.684 05:09:43 -- scripts/common.sh@339 -- # ver1_l=2 00:29:20.684 05:09:43 -- scripts/common.sh@340 -- # ver2_l=1 00:29:20.684 05:09:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:20.684 05:09:43 -- scripts/common.sh@343 -- # case "$op" in 00:29:20.684 05:09:43 -- scripts/common.sh@344 -- # : 1 00:29:20.684 05:09:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:20.684 05:09:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.684 05:09:43 -- scripts/common.sh@364 -- # decimal 1 00:29:20.684 05:09:43 -- scripts/common.sh@352 -- # local d=1 00:29:20.684 05:09:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.684 05:09:43 -- scripts/common.sh@354 -- # echo 1 00:29:20.684 05:09:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:20.684 05:09:43 -- scripts/common.sh@365 -- # decimal 2 00:29:20.684 05:09:43 -- scripts/common.sh@352 -- # local d=2 00:29:20.684 05:09:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.684 05:09:43 -- scripts/common.sh@354 -- # echo 2 00:29:20.684 05:09:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:20.684 05:09:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:20.684 05:09:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:20.684 05:09:43 -- scripts/common.sh@367 -- # return 0 00:29:20.684 05:09:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.684 05:09:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:20.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.684 --rc genhtml_branch_coverage=1 00:29:20.684 --rc genhtml_function_coverage=1 00:29:20.684 --rc genhtml_legend=1 00:29:20.684 --rc geninfo_all_blocks=1 00:29:20.685 --rc geninfo_unexecuted_blocks=1 00:29:20.685 00:29:20.685 ' 00:29:20.685 05:09:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.685 --rc genhtml_branch_coverage=1 00:29:20.685 --rc genhtml_function_coverage=1 00:29:20.685 --rc genhtml_legend=1 00:29:20.685 --rc geninfo_all_blocks=1 00:29:20.685 --rc geninfo_unexecuted_blocks=1 00:29:20.685 00:29:20.685 ' 00:29:20.685 05:09:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.685 --rc genhtml_branch_coverage=1 00:29:20.685 --rc genhtml_function_coverage=1 00:29:20.685 --rc genhtml_legend=1 00:29:20.685 --rc geninfo_all_blocks=1 00:29:20.685 --rc geninfo_unexecuted_blocks=1 00:29:20.685 00:29:20.685 ' 00:29:20.685 05:09:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.685 --rc genhtml_branch_coverage=1 00:29:20.685 --rc genhtml_function_coverage=1 00:29:20.685 --rc genhtml_legend=1 00:29:20.685 --rc geninfo_all_blocks=1 00:29:20.685 --rc geninfo_unexecuted_blocks=1 00:29:20.685 00:29:20.685 ' 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_94464 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_94464 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=94495 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:20.685 05:09:43 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 94495 00:29:20.685 05:09:43 -- common/autotest_common.sh@829 -- # '[' -z 94495 ']' 00:29:20.685 05:09:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.685 05:09:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.685 05:09:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.685 05:09:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.685 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:29:20.685 [2024-11-18 05:09:43.982325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:20.685 [2024-11-18 05:09:43.982460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94495 ] 00:29:20.685 [2024-11-18 05:09:44.136899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.968 [2024-11-18 05:09:44.301951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:20.968 [2024-11-18 05:09:44.302384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.968 [2024-11-18 05:09:44.302394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.562 05:09:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.562 05:09:44 -- common/autotest_common.sh@862 -- # return 0 00:29:21.562 Checking default timeout settings: 00:29:21.562 05:09:44 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:21.562 05:09:44 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:21.820 Making settings changes with rpc: 00:29:21.820 05:09:45 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:21.820 05:09:45 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:22.078 Check default vs. modified settings: 00:29:22.078 05:09:45 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:29:22.078 05:09:45 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:22.645 Setting action_on_timeout is changed as expected. 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:22.645 Setting timeout_us is changed as expected. 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:29:22.645 Setting timeout_admin_us is changed as expected. 
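The grep/awk/sed churn above is the whole mechanism of this test: save_config output is captured once before and once after bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort, and each field must differ between the two snapshots. Condensed, with the /tmp file names from this run:

```bash
# Compare one saved-config snapshot against the other, field by field;
# the sed strips JSON punctuation exactly as the trace above does.
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_94464 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_94464 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ $before == "$after" ]]; then
        echo "ERROR: $setting did not change" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done
```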
00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_94464 /tmp/settings_modified_94464 00:29:22.645 05:09:45 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 94495 00:29:22.645 05:09:45 -- common/autotest_common.sh@936 -- # '[' -z 94495 ']' 00:29:22.645 05:09:45 -- common/autotest_common.sh@940 -- # kill -0 94495 00:29:22.645 05:09:45 -- common/autotest_common.sh@941 -- # uname 00:29:22.645 05:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:22.645 05:09:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94495 00:29:22.645 05:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:22.645 killing process with pid 94495 00:29:22.645 05:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:22.645 05:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94495' 00:29:22.645 05:09:45 -- common/autotest_common.sh@955 -- # kill 94495 00:29:22.645 05:09:45 -- common/autotest_common.sh@960 -- # wait 94495 00:29:24.547 RPC TIMEOUT SETTING TEST PASSED. 00:29:24.547 05:09:47 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:29:24.547 00:29:24.547 real 0m3.926s 00:29:24.547 user 0m7.601s 00:29:24.547 sys 0m0.581s 00:29:24.547 05:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:24.547 05:09:47 -- common/autotest_common.sh@10 -- # set +x 00:29:24.547 ************************************ 00:29:24.547 END TEST nvme_rpc_timeouts 00:29:24.547 ************************************ 00:29:24.547 05:09:47 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]] 00:29:24.547 05:09:47 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@255 -- # timing_exit lib 00:29:24.547 05:09:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.547 05:09:47 -- common/autotest_common.sh@10 -- # set +x 00:29:24.547 05:09:47 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:24.547 05:09:47 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:24.547 05:09:47 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:24.547 05:09:47 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:24.547 05:09:47 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]] 00:29:24.547 05:09:47 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:24.547 05:09:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:24.547 05:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:24.547 
05:09:47 -- common/autotest_common.sh@10 -- # set +x 00:29:24.547 ************************************ 00:29:24.547 START TEST blockdev_raid5f 00:29:24.547 ************************************ 00:29:24.547 05:09:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:24.547 * Looking for test storage... 00:29:24.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:24.547 05:09:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:24.547 05:09:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:24.547 05:09:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:24.547 05:09:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:24.547 05:09:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:24.547 05:09:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:24.547 05:09:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:24.547 05:09:47 -- scripts/common.sh@335 -- # IFS=.-: 00:29:24.547 05:09:47 -- scripts/common.sh@335 -- # read -ra ver1 00:29:24.547 05:09:47 -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.547 05:09:47 -- scripts/common.sh@336 -- # read -ra ver2 00:29:24.547 05:09:47 -- scripts/common.sh@337 -- # local 'op=<' 00:29:24.547 05:09:47 -- scripts/common.sh@339 -- # ver1_l=2 00:29:24.547 05:09:47 -- scripts/common.sh@340 -- # ver2_l=1 00:29:24.547 05:09:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:24.547 05:09:47 -- scripts/common.sh@343 -- # case "$op" in 00:29:24.547 05:09:47 -- scripts/common.sh@344 -- # : 1 00:29:24.547 05:09:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:24.547 05:09:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.547 05:09:47 -- scripts/common.sh@364 -- # decimal 1 00:29:24.547 05:09:47 -- scripts/common.sh@352 -- # local d=1 00:29:24.547 05:09:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.547 05:09:47 -- scripts/common.sh@354 -- # echo 1 00:29:24.547 05:09:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:24.547 05:09:47 -- scripts/common.sh@365 -- # decimal 2 00:29:24.547 05:09:47 -- scripts/common.sh@352 -- # local d=2 00:29:24.547 05:09:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.547 05:09:47 -- scripts/common.sh@354 -- # echo 2 00:29:24.547 05:09:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:24.547 05:09:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:24.547 05:09:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:24.547 05:09:47 -- scripts/common.sh@367 -- # return 0 00:29:24.547 05:09:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.547 05:09:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:24.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.547 --rc genhtml_branch_coverage=1 00:29:24.547 --rc genhtml_function_coverage=1 00:29:24.547 --rc genhtml_legend=1 00:29:24.547 --rc geninfo_all_blocks=1 00:29:24.547 --rc geninfo_unexecuted_blocks=1 00:29:24.547 00:29:24.547 ' 00:29:24.547 05:09:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:24.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.547 --rc genhtml_branch_coverage=1 00:29:24.547 --rc genhtml_function_coverage=1 00:29:24.547 --rc genhtml_legend=1 00:29:24.547 --rc geninfo_all_blocks=1 00:29:24.547 --rc geninfo_unexecuted_blocks=1 00:29:24.547 00:29:24.547 ' 00:29:24.547 05:09:47 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:29:24.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.547 --rc genhtml_branch_coverage=1 00:29:24.547 --rc genhtml_function_coverage=1 00:29:24.547 --rc genhtml_legend=1 00:29:24.547 --rc geninfo_all_blocks=1 00:29:24.547 --rc geninfo_unexecuted_blocks=1 00:29:24.547 00:29:24.547 ' 00:29:24.547 05:09:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:24.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.547 --rc genhtml_branch_coverage=1 00:29:24.547 --rc genhtml_function_coverage=1 00:29:24.547 --rc genhtml_legend=1 00:29:24.547 --rc geninfo_all_blocks=1 00:29:24.547 --rc geninfo_unexecuted_blocks=1 00:29:24.547 00:29:24.547 ' 00:29:24.547 05:09:47 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:24.548 05:09:47 -- bdev/nbd_common.sh@6 -- # set -e 00:29:24.548 05:09:47 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:24.548 05:09:47 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:24.548 05:09:47 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:24.548 05:09:47 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:24.548 05:09:47 -- bdev/blockdev.sh@18 -- # : 00:29:24.548 05:09:47 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:24.548 05:09:47 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:24.548 05:09:47 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:24.548 05:09:47 -- bdev/blockdev.sh@672 -- # uname -s 00:29:24.548 05:09:47 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:24.548 05:09:47 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:24.548 05:09:47 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:29:24.548 05:09:47 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:24.548 05:09:47 -- bdev/blockdev.sh@682 -- # dek= 00:29:24.548 05:09:47 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:24.548 05:09:47 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:24.548 05:09:47 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:24.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.548 05:09:47 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:29:24.548 05:09:47 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:29:24.548 05:09:47 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:24.548 05:09:47 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=94643 00:29:24.548 05:09:47 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:24.548 05:09:47 -- bdev/blockdev.sh@47 -- # waitforlisten 94643 00:29:24.548 05:09:47 -- common/autotest_common.sh@829 -- # '[' -z 94643 ']' 00:29:24.548 05:09:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.548 05:09:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.548 05:09:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.548 05:09:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.548 05:09:47 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:24.548 05:09:47 -- common/autotest_common.sh@10 -- # set +x 00:29:24.548 [2024-11-18 05:09:48.027602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
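blockdev.sh is now repeating the launch dance every suite in this log performs: start spdk_tgt in the background, arm a trap so the target is killed on any exit, then block until the RPC socket answers. A stripped-down equivalent of the start_spdk_tgt/waitforlisten pair; the polling loop here is a simplification of the real autotest_common.sh helper:

```bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$spdk_tgt" &                     # target runs in the background
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT

# Poll the RPC socket until the target answers, as waitforlisten does.
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
```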
00:29:24.548 [2024-11-18 05:09:48.027772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94643 ] 00:29:24.806 [2024-11-18 05:09:48.196160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.065 [2024-11-18 05:09:48.352968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:25.065 [2024-11-18 05:09:48.353185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.440 05:09:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.440 05:09:49 -- common/autotest_common.sh@862 -- # return 0 00:29:26.440 05:09:49 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:26.440 05:09:49 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:29:26.440 05:09:49 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:29:26.440 05:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.440 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.440 Malloc0 00:29:26.440 Malloc1 00:29:26.440 Malloc2 00:29:26.440 05:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.440 05:09:49 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:26.440 05:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.440 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.440 05:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.440 05:09:49 -- bdev/blockdev.sh@738 -- # cat 00:29:26.440 05:09:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:26.440 05:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.440 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.440 05:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.440 05:09:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:26.440 05:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.440 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.440 05:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.440 05:09:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:26.440 05:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.440 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.440 05:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.440 05:09:49 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:26.440 05:09:49 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:26.440 05:09:49 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:26.440 05:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.440 05:09:49 -- common/autotest_common.sh@10 -- # set +x 00:29:26.440 05:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.440 05:09:49 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:26.440 05:09:49 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "55a1c3fb-6bdd-4b9b-aa4e-748f86639211"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "55a1c3fb-6bdd-4b9b-aa4e-748f86639211",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "55a1c3fb-6bdd-4b9b-aa4e-748f86639211",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b87886ec-4c26-4e0b-88ee-b0d61820b3ae",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6daed8f8-39bd-42d1-bcef-985993859220",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "bda4e55c-7d93-496a-b78c-7708b3717a97",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:26.440 05:09:49 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:26.440 05:09:49 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:26.440 05:09:49 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:29:26.440 05:09:49 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:26.440 05:09:49 -- bdev/blockdev.sh@752 -- # killprocess 94643 00:29:26.440 05:09:49 -- common/autotest_common.sh@936 -- # '[' -z 94643 ']' 00:29:26.440 05:09:49 -- common/autotest_common.sh@940 -- # kill -0 94643 00:29:26.440 05:09:49 -- common/autotest_common.sh@941 -- # uname 00:29:26.440 05:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:26.440 05:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94643 00:29:26.440 05:09:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:26.440 05:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:26.440 killing process with pid 94643 00:29:26.440 05:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94643' 00:29:26.441 05:09:49 -- common/autotest_common.sh@955 -- # kill 94643 00:29:26.441 05:09:49 -- common/autotest_common.sh@960 -- # wait 94643 00:29:28.341 05:09:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:28.341 05:09:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:28.341 05:09:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:29:28.341 05:09:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:28.341 05:09:51 -- common/autotest_common.sh@10 -- # set +x 00:29:28.341 ************************************ 00:29:28.341 START TEST bdev_hello_world 00:29:28.341 ************************************ 00:29:28.341 05:09:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:28.341 [2024-11-18 05:09:51.852543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:28.341 [2024-11-18 05:09:51.852707] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94707 ] 00:29:28.599 [2024-11-18 05:09:52.023555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.857 [2024-11-18 05:09:52.169678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.116 [2024-11-18 05:09:52.553547] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:29.116 [2024-11-18 05:09:52.553608] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:29:29.116 [2024-11-18 05:09:52.553649] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:29.116 [2024-11-18 05:09:52.554295] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:29.116 [2024-11-18 05:09:52.554462] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:29.116 [2024-11-18 05:09:52.554515] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:29.116 [2024-11-18 05:09:52.554577] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:29.116 00:29:29.116 [2024-11-18 05:09:52.554601] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:30.490 00:29:30.490 real 0m1.846s 00:29:30.490 user 0m1.521s 00:29:30.490 sys 0m0.216s 00:29:30.490 05:09:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:30.490 ************************************ 00:29:30.490 END TEST bdev_hello_world 00:29:30.490 ************************************ 00:29:30.490 05:09:53 -- common/autotest_common.sh@10 -- # set +x 00:29:30.490 05:09:53 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:30.490 05:09:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:30.490 05:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.490 05:09:53 -- common/autotest_common.sh@10 -- # set +x 00:29:30.490 ************************************ 00:29:30.490 START TEST bdev_bounds 00:29:30.490 ************************************ 00:29:30.490 05:09:53 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:29:30.490 05:09:53 -- bdev/blockdev.sh@288 -- # bdevio_pid=94745 00:29:30.490 05:09:53 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:30.490 05:09:53 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:30.490 Process bdevio pid: 94745 00:29:30.490 05:09:53 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 94745' 00:29:30.490 05:09:53 -- bdev/blockdev.sh@291 -- # waitforlisten 94745 00:29:30.490 05:09:53 -- common/autotest_common.sh@829 -- # '[' -z 94745 ']' 00:29:30.490 05:09:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.490 05:09:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.490 05:09:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:30.490 05:09:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.490 05:09:53 -- common/autotest_common.sh@10 -- # set +x 00:29:30.490 [2024-11-18 05:09:53.740951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:30.490 [2024-11-18 05:09:53.741094] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94745 ] 00:29:30.490 [2024-11-18 05:09:53.887525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.748 [2024-11-18 05:09:54.041706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.748 [2024-11-18 05:09:54.041856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.748 [2024-11-18 05:09:54.041876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.314 05:09:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.314 05:09:54 -- common/autotest_common.sh@862 -- # return 0 00:29:31.314 05:09:54 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:31.314 I/O targets: 00:29:31.314 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:29:31.314 00:29:31.314 00:29:31.314 CUnit - A unit testing framework for C - Version 2.1-3 00:29:31.314 http://cunit.sourceforge.net/ 00:29:31.314 00:29:31.314 00:29:31.314 Suite: bdevio tests on: raid5f 00:29:31.314 Test: blockdev write read block ...passed 00:29:31.314 Test: blockdev write zeroes read block ...passed 00:29:31.314 Test: blockdev write zeroes read no split ...passed 00:29:31.573 Test: blockdev write zeroes read split ...passed 00:29:31.573 Test: blockdev write zeroes read split partial ...passed 00:29:31.573 Test: blockdev reset ...passed 00:29:31.573 Test: blockdev write read 8 blocks ...passed 00:29:31.573 Test: blockdev write read size > 128k ...passed 00:29:31.573 Test: blockdev write read invalid size ...passed 00:29:31.573 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:31.573 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:31.573 Test: blockdev write read max offset ...passed 00:29:31.573 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:31.573 Test: blockdev writev readv 8 blocks ...passed 00:29:31.573 Test: blockdev writev readv 30 x 1block ...passed 00:29:31.573 Test: blockdev writev readv block ...passed 00:29:31.573 Test: blockdev writev readv size > 128k ...passed 00:29:31.573 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:31.573 Test: blockdev comparev and writev ...passed 00:29:31.573 Test: blockdev nvme passthru rw ...passed 00:29:31.573 Test: blockdev nvme passthru vendor specific ...passed 00:29:31.573 Test: blockdev nvme admin passthru ...passed 00:29:31.573 Test: blockdev copy ...passed 00:29:31.573 00:29:31.573 Run Summary: Type Total Ran Passed Failed Inactive 00:29:31.573 suites 1 1 n/a 0 0 00:29:31.573 tests 23 23 23 0 0 00:29:31.573 asserts 130 130 130 0 n/a 00:29:31.573 00:29:31.573 Elapsed time = 0.452 seconds 00:29:31.573 0 00:29:31.573 05:09:54 -- bdev/blockdev.sh@293 -- # killprocess 94745 00:29:31.573 05:09:54 -- common/autotest_common.sh@936 -- # '[' -z 94745 ']' 00:29:31.573 05:09:54 -- common/autotest_common.sh@940 -- # kill -0 94745 00:29:31.573 05:09:54 -- common/autotest_common.sh@941 -- # uname 00:29:31.573 05:09:55 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:31.573 05:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94745 00:29:31.573 05:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:31.573 05:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:31.573 killing process with pid 94745 00:29:31.573 05:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94745' 00:29:31.573 05:09:55 -- common/autotest_common.sh@955 -- # kill 94745 00:29:31.573 05:09:55 -- common/autotest_common.sh@960 -- # wait 94745 00:29:32.949 05:09:56 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:32.949 00:29:32.949 real 0m2.490s 00:29:32.949 user 0m6.146s 00:29:32.949 sys 0m0.313s 00:29:32.949 05:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:32.949 05:09:56 -- common/autotest_common.sh@10 -- # set +x 00:29:32.949 ************************************ 00:29:32.949 END TEST bdev_bounds 00:29:32.949 ************************************ 00:29:32.949 05:09:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:32.949 05:09:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:32.949 05:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:32.949 05:09:56 -- common/autotest_common.sh@10 -- # set +x 00:29:32.950 ************************************ 00:29:32.950 START TEST bdev_nbd 00:29:32.950 ************************************ 00:29:32.950 05:09:56 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:32.950 05:09:56 -- bdev/blockdev.sh@298 -- # uname -s 00:29:32.950 05:09:56 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:32.950 05:09:56 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:32.950 05:09:56 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:32.950 05:09:56 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:29:32.950 05:09:56 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:32.950 05:09:56 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:32.950 05:09:56 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:32.950 05:09:56 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:32.950 05:09:56 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:32.950 05:09:56 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:32.950 05:09:56 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:32.950 05:09:56 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:32.950 05:09:56 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:29:32.950 05:09:56 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:32.950 05:09:56 -- bdev/blockdev.sh@316 -- # nbd_pid=94799 00:29:32.950 05:09:56 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:32.950 05:09:56 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:32.950 05:09:56 -- bdev/blockdev.sh@318 -- # waitforlisten 94799 /var/tmp/spdk-nbd.sock 00:29:32.950 05:09:56 -- common/autotest_common.sh@829 -- # '[' -z 94799 ']' 00:29:32.950 05:09:56 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:32.950 05:09:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:32.950 05:09:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:32.950 05:09:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.950 05:09:56 -- common/autotest_common.sh@10 -- # set +x 00:29:32.950 [2024-11-18 05:09:56.311519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:32.950 [2024-11-18 05:09:56.311681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.208 [2024-11-18 05:09:56.478157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.208 [2024-11-18 05:09:56.624292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.775 05:09:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.775 05:09:57 -- common/autotest_common.sh@862 -- # return 0 00:29:33.775 05:09:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@24 -- # local i 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:33.775 05:09:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:29:34.034 05:09:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:34.034 05:09:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:34.034 05:09:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:34.034 05:09:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:34.034 05:09:57 -- common/autotest_common.sh@867 -- # local i 00:29:34.034 05:09:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:34.034 05:09:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:34.034 05:09:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:34.034 05:09:57 -- common/autotest_common.sh@871 -- # break 00:29:34.034 05:09:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:34.034 05:09:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:34.034 05:09:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.034 1+0 records in 00:29:34.034 1+0 records out 00:29:34.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137924 s, 3.0 MB/s 00:29:34.034 05:09:57 -- common/autotest_common.sh@884 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.034 05:09:57 -- common/autotest_common.sh@884 -- # size=4096 00:29:34.034 05:09:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.034 05:09:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:34.034 05:09:57 -- common/autotest_common.sh@887 -- # return 0 00:29:34.034 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:34.034 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:34.034 05:09:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:34.293 { 00:29:34.293 "nbd_device": "/dev/nbd0", 00:29:34.293 "bdev_name": "raid5f" 00:29:34.293 } 00:29:34.293 ]' 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:34.293 { 00:29:34.293 "nbd_device": "/dev/nbd0", 00:29:34.293 "bdev_name": "raid5f" 00:29:34.293 } 00:29:34.293 ]' 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@51 -- # local i 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.293 05:09:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@41 -- # break 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.552 05:09:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@65 -- # true 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@65 -- # count=0 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@122 -- # count=0 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@127 -- # return 0 00:29:34.811 05:09:58 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@12 -- # local i 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:34.811 05:09:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:29:35.070 /dev/nbd0 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:35.070 05:09:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:35.070 05:09:58 -- common/autotest_common.sh@867 -- # local i 00:29:35.070 05:09:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:35.070 05:09:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:35.070 05:09:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:35.070 05:09:58 -- common/autotest_common.sh@871 -- # break 00:29:35.070 05:09:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:35.070 05:09:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:35.070 05:09:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:35.070 1+0 records in 00:29:35.070 1+0 records out 00:29:35.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321011 s, 12.8 MB/s 00:29:35.070 05:09:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.070 05:09:58 -- common/autotest_common.sh@884 -- # size=4096 00:29:35.070 05:09:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.070 05:09:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:35.070 05:09:58 -- common/autotest_common.sh@887 -- # return 0 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:35.070 { 00:29:35.070 "nbd_device": "/dev/nbd0", 00:29:35.070 "bdev_name": "raid5f" 00:29:35.070 } 00:29:35.070 ]' 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:35.070 { 00:29:35.070 "nbd_device": "/dev/nbd0", 00:29:35.070 "bdev_name": "raid5f" 00:29:35.070 
} 00:29:35.070 ]' 00:29:35.070 05:09:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@65 -- # count=1 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@95 -- # count=1 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:35.328 05:09:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:35.329 256+0 records in 00:29:35.329 256+0 records out 00:29:35.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100593 s, 104 MB/s 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:35.329 256+0 records in 00:29:35.329 256+0 records out 00:29:35.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348894 s, 30.1 MB/s 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@51 -- # local i 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:35.329 05:09:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
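Condensed, the NBD write/verify cycle that just completed above is four commands, with paths and sizes exactly as logged in this run:
$ dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256   # generate a random test pattern
$ dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the NBD export of raid5f
$ cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0   # read back and compare byte-for-byte
$ rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest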
00:29:35.588 05:09:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@41 -- # break 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@45 -- # return 0 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.588 05:09:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:35.864 05:09:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@65 -- # true 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@65 -- # count=0 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@104 -- # count=0 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@109 -- # return 0 00:29:35.865 05:09:59 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:35.865 05:09:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:36.129 malloc_lvol_verify 00:29:36.129 05:09:59 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:36.129 985694fb-82e4-4aeb-9cf7-39190c1122ab 00:29:36.129 05:09:59 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:36.387 f1d64a28-2b5f-4c17-b095-33edd0f03f95 00:29:36.387 05:09:59 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:36.645 /dev/nbd0 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:36.645 mke2fs 1.47.0 (5-Feb-2023) 00:29:36.645 00:29:36.645 Filesystem too small for a journal 00:29:36.645 Discarding device blocks: 0/1024 done 00:29:36.645 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:36.645 00:29:36.645 Allocating group tables: 0/1 done 00:29:36.645 Writing inode tables: 0/1 done 00:29:36.645 Writing superblocks and filesystem accounting information: 0/1 done 00:29:36.645 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@51 -- # local i 00:29:36.645 05:10:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:29:36.645 05:10:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@41 -- # break 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:36.904 05:10:00 -- bdev/nbd_common.sh@147 -- # return 0 00:29:36.904 05:10:00 -- bdev/blockdev.sh@324 -- # killprocess 94799 00:29:36.904 05:10:00 -- common/autotest_common.sh@936 -- # '[' -z 94799 ']' 00:29:36.904 05:10:00 -- common/autotest_common.sh@940 -- # kill -0 94799 00:29:36.904 05:10:00 -- common/autotest_common.sh@941 -- # uname 00:29:36.904 05:10:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:36.904 05:10:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94799 00:29:36.904 05:10:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:36.904 05:10:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:36.904 killing process with pid 94799 00:29:36.904 05:10:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94799' 00:29:36.904 05:10:00 -- common/autotest_common.sh@955 -- # kill 94799 00:29:36.904 05:10:00 -- common/autotest_common.sh@960 -- # wait 94799 00:29:38.282 05:10:01 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:38.282 00:29:38.282 real 0m5.257s 00:29:38.282 user 0m7.456s 00:29:38.282 sys 0m1.100s 00:29:38.282 05:10:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:38.282 05:10:01 -- common/autotest_common.sh@10 -- # set +x 00:29:38.282 ************************************ 00:29:38.282 END TEST bdev_nbd 00:29:38.282 ************************************ 00:29:38.282 05:10:01 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:38.282 05:10:01 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:29:38.282 05:10:01 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:29:38.282 05:10:01 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:38.282 05:10:01 -- common/autotest_common.sh@10 -- # set +x 00:29:38.282 ************************************ 00:29:38.282 START TEST bdev_fio 00:29:38.282 ************************************ 00:29:38.282 05:10:01 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:29:38.282 05:10:01 -- bdev/blockdev.sh@329 -- # local env_context 00:29:38.282 05:10:01 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:38.282 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:38.282 05:10:01 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:38.282 05:10:01 -- bdev/blockdev.sh@337 -- # echo '' 00:29:38.282 05:10:01 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:29:38.282 05:10:01 -- bdev/blockdev.sh@337 -- # env_context= 00:29:38.282 05:10:01 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:38.282 05:10:01 -- common/autotest_common.sh@1270 -- # local workload=verify 00:29:38.282 05:10:01 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:29:38.282 05:10:01 -- common/autotest_common.sh@1272 -- # local env_context= 00:29:38.282 05:10:01 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:29:38.282 05:10:01 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:38.282 05:10:01 -- common/autotest_common.sh@1290 -- # cat 00:29:38.282 05:10:01 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1303 -- # cat 00:29:38.282 05:10:01 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:29:38.282 05:10:01 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:38.282 05:10:01 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:29:38.282 05:10:01 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:29:38.282 05:10:01 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:29:38.282 05:10:01 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:29:38.282 05:10:01 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:38.282 05:10:01 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.282 05:10:01 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:38.282 05:10:01 -- common/autotest_common.sh@10 -- # set +x 00:29:38.282 ************************************ 00:29:38.282 START TEST bdev_fio_rw_verify 00:29:38.282 ************************************ 00:29:38.282 05:10:01 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.282 05:10:01 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.282 05:10:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:38.282 05:10:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 
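Pieced together from the fio_config_gen and fio_params lines above, this rw-verify pass amounts to roughly the following. The serialize_overlap=1, [job_raid5f], and filename=raid5f lines are echoed in this log; any other contents of the generated bdev.fio are not shown here, and the ASan preload is the one resolved just below:
$ cat >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
serialize_overlap=1
[job_raid5f]
filename=raid5f
EOF
$ LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output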
00:29:38.282 05:10:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:38.282 05:10:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:38.282 05:10:01 -- common/autotest_common.sh@1330 -- # shift 00:29:38.282 05:10:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:38.282 05:10:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.282 05:10:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:38.282 05:10:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:38.282 05:10:01 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:29:38.282 05:10:01 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:29:38.282 05:10:01 -- common/autotest_common.sh@1336 -- # break 00:29:38.282 05:10:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:38.282 05:10:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.541 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:38.541 fio-3.35 00:29:38.541 Starting 1 thread 00:29:50.749 00:29:50.749 job_raid5f: (groupid=0, jobs=1): err= 0: pid=95017: Mon Nov 18 05:10:12 2024 00:29:50.749 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(417MiB/10001msec) 00:29:50.749 slat (usec): min=19, max=468, avg=22.81, stdev= 6.27 00:29:50.749 clat (usec): min=11, max=782, avg=147.39, stdev=58.08 00:29:50.749 lat (usec): min=33, max=809, avg=170.19, stdev=59.37 00:29:50.749 clat percentiles (usec): 00:29:50.749 | 50.000th=[ 145], 99.000th=[ 277], 99.900th=[ 412], 99.990th=[ 676], 00:29:50.749 | 99.999th=[ 758] 00:29:50.749 write: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(433MiB/9874msec); 0 zone resets 00:29:50.749 slat (usec): min=9, max=303, avg=19.82, stdev= 6.37 00:29:50.749 clat (usec): min=60, max=1041, avg=336.59, stdev=58.87 00:29:50.749 lat (usec): min=78, max=1061, avg=356.41, stdev=60.90 00:29:50.749 clat percentiles (usec): 00:29:50.749 | 50.000th=[ 334], 99.000th=[ 523], 99.900th=[ 750], 99.990th=[ 955], 00:29:50.749 | 99.999th=[ 1029] 00:29:50.749 bw ( KiB/s): min=41112, max=48480, per=98.78%, avg=44370.95, stdev=2187.24, samples=19 00:29:50.749 iops : min=10278, max=12120, avg=11092.74, stdev=546.81, samples=19 00:29:50.749 lat (usec) : 20=0.01%, 50=0.01%, 100=11.35%, 250=38.68%, 500=49.25% 00:29:50.749 lat (usec) : 750=0.66%, 1000=0.05% 00:29:50.749 lat (msec) : 2=0.01% 00:29:50.749 cpu : usr=99.06%, sys=0.89%, ctx=48, majf=0, minf=8950 00:29:50.749 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.749 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.749 issued rwts: total=106636,110875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.749 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:50.749 00:29:50.749 Run status group 0 (all jobs): 00:29:50.749 READ: bw=41.7MiB/s (43.7MB/s), 
41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=417MiB (437MB), run=10001-10001msec 00:29:50.749 WRITE: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=433MiB (454MB), run=9874-9874msec 00:29:50.749 ----------------------------------------------------- 00:29:50.749 Suppressions used: 00:29:50.749 count bytes template 00:29:50.749 1 7 /usr/src/fio/parse.c 00:29:50.749 912 87552 /usr/src/fio/iolog.c 00:29:50.749 1 904 libcrypto.so 00:29:50.749 ----------------------------------------------------- 00:29:50.749 00:29:50.749 00:29:50.749 real 0m12.201s 00:29:50.749 user 0m12.794s 00:29:50.749 sys 0m0.688s 00:29:50.749 05:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:50.749 ************************************ 00:29:50.749 END TEST bdev_fio_rw_verify 00:29:50.749 ************************************ 00:29:50.749 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:29:50.749 05:10:13 -- bdev/blockdev.sh@348 -- # rm -f 00:29:50.749 05:10:13 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:50.749 05:10:13 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:29:50.749 05:10:13 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:50.749 05:10:13 -- common/autotest_common.sh@1270 -- # local workload=trim 00:29:50.749 05:10:13 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:29:50.749 05:10:13 -- common/autotest_common.sh@1272 -- # local env_context= 00:29:50.749 05:10:13 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:29:50.749 05:10:13 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:50.749 05:10:13 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:29:50.749 05:10:13 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:29:50.749 05:10:13 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:50.749 05:10:13 -- common/autotest_common.sh@1290 -- # cat 00:29:50.749 05:10:13 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:29:50.749 05:10:13 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:29:50.749 05:10:13 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:29:50.750 05:10:13 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "55a1c3fb-6bdd-4b9b-aa4e-748f86639211"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "55a1c3fb-6bdd-4b9b-aa4e-748f86639211",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "55a1c3fb-6bdd-4b9b-aa4e-748f86639211",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b87886ec-4c26-4e0b-88ee-b0d61820b3ae",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "6daed8f8-39bd-42d1-bcef-985993859220",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "bda4e55c-7d93-496a-b78c-7708b3717a97",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:50.750 05:10:13 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:50.750 05:10:13 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:29:50.750 05:10:13 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:50.750 05:10:13 -- bdev/blockdev.sh@360 -- # popd 00:29:50.750 /home/vagrant/spdk_repo/spdk 00:29:50.750 05:10:13 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:29:50.750 05:10:13 -- bdev/blockdev.sh@362 -- # return 0 00:29:50.750 00:29:50.750 real 0m12.339s 00:29:50.750 user 0m12.853s 00:29:50.750 sys 0m0.770s 00:29:50.750 05:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:50.750 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:29:50.750 ************************************ 00:29:50.750 END TEST bdev_fio 00:29:50.750 ************************************ 00:29:50.750 05:10:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:50.750 05:10:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:50.750 05:10:13 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:29:50.750 05:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:50.750 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:29:50.750 ************************************ 00:29:50.750 START TEST bdev_verify 00:29:50.750 ************************************ 00:29:50.750 05:10:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:50.750 [2024-11-18 05:10:14.005164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:50.750 [2024-11-18 05:10:14.005342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95179 ] 00:29:50.750 [2024-11-18 05:10:14.176245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:51.009 [2024-11-18 05:10:14.330691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.009 [2024-11-18 05:10:14.330712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.268 Running I/O for 5 seconds... 
00:29:56.537 00:29:56.537 Latency(us) 00:29:56.537 [2024-11-18T05:10:20.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.537 [2024-11-18T05:10:20.061Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:56.537 Verification LBA range: start 0x0 length 0x2000 00:29:56.537 raid5f : 5.01 11915.38 46.54 0.00 0.00 17023.59 240.17 17396.83 00:29:56.537 [2024-11-18T05:10:20.061Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:56.537 Verification LBA range: start 0x2000 length 0x2000 00:29:56.537 raid5f : 5.01 11977.62 46.79 0.00 0.00 16933.48 370.50 17396.83 00:29:56.537 [2024-11-18T05:10:20.061Z] =================================================================================================================== 00:29:56.537 [2024-11-18T05:10:20.061Z] Total : 23893.00 93.33 0.00 0.00 16978.42 240.17 17396.83 00:29:57.472 00:29:57.472 real 0m6.935s 00:29:57.472 user 0m12.765s 00:29:57.472 sys 0m0.259s 00:29:57.472 05:10:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:57.472 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:29:57.472 ************************************ 00:29:57.472 END TEST bdev_verify 00:29:57.472 ************************************ 00:29:57.472 05:10:20 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:57.472 05:10:20 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:29:57.472 05:10:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:57.472 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:29:57.472 ************************************ 00:29:57.472 START TEST bdev_verify_big_io 00:29:57.472 ************************************ 00:29:57.472 05:10:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:57.472 [2024-11-18 05:10:20.985632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:57.472 [2024-11-18 05:10:20.985801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95267 ] 00:29:57.731 [2024-11-18 05:10:21.156750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.990 [2024-11-18 05:10:21.310872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.990 [2024-11-18 05:10:21.310894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.248 Running I/O for 5 seconds... 
00:30:03.599 00:30:03.599 Latency(us) 00:30:03.599 [2024-11-18T05:10:27.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.599 [2024-11-18T05:10:27.123Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:03.599 Verification LBA range: start 0x0 length 0x200 00:30:03.599 raid5f : 5.12 798.15 49.88 0.00 0.00 4191088.55 134.98 148707.14 00:30:03.599 [2024-11-18T05:10:27.123Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:03.599 Verification LBA range: start 0x200 length 0x200 00:30:03.599 raid5f : 5.12 803.00 50.19 0.00 0.00 4164558.40 119.16 147753.89 00:30:03.599 [2024-11-18T05:10:27.123Z] =================================================================================================================== 00:30:03.599 [2024-11-18T05:10:27.123Z] Total : 1601.15 100.07 0.00 0.00 4177783.04 119.16 148707.14 00:30:04.537 00:30:04.537 real 0m7.060s 00:30:04.537 user 0m13.024s 00:30:04.537 sys 0m0.250s 00:30:04.537 ************************************ 00:30:04.537 END TEST bdev_verify_big_io 00:30:04.537 ************************************ 00:30:04.537 05:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:04.537 05:10:27 -- common/autotest_common.sh@10 -- # set +x 00:30:04.537 05:10:28 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:04.537 05:10:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:04.537 05:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:04.537 05:10:28 -- common/autotest_common.sh@10 -- # set +x 00:30:04.537 ************************************ 00:30:04.537 START TEST bdev_write_zeroes 00:30:04.537 ************************************ 00:30:04.537 05:10:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:04.796 [2024-11-18 05:10:28.085967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:04.796 [2024-11-18 05:10:28.086118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95360 ] 00:30:04.797 [2024-11-18 05:10:28.239688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.056 [2024-11-18 05:10:28.389207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.315 Running I/O for 1 seconds... 
00:30:06.695 00:30:06.695 Latency(us) 00:30:06.695 [2024-11-18T05:10:30.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.695 [2024-11-18T05:10:30.219Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:06.696 raid5f : 1.00 26965.40 105.33 0.00 0.00 4733.38 1571.37 6285.50 00:30:06.696 [2024-11-18T05:10:30.220Z] =================================================================================================================== 00:30:06.696 [2024-11-18T05:10:30.220Z] Total : 26965.40 105.33 0.00 0.00 4733.38 1571.37 6285.50 00:30:07.633 00:30:07.633 real 0m2.858s 00:30:07.633 user 0m2.545s 00:30:07.633 sys 0m0.204s 00:30:07.633 05:10:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:07.633 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:30:07.633 ************************************ 00:30:07.633 END TEST bdev_write_zeroes 00:30:07.633 ************************************ 00:30:07.633 05:10:30 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:07.633 05:10:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:07.633 05:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:07.633 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:30:07.633 ************************************ 00:30:07.633 START TEST bdev_json_nonenclosed 00:30:07.633 ************************************ 00:30:07.633 05:10:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:07.633 [2024-11-18 05:10:31.011002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:07.633 [2024-11-18 05:10:31.011183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95402 ] 00:30:07.892 [2024-11-18 05:10:31.180557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.892 [2024-11-18 05:10:31.332074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.892 [2024-11-18 05:10:31.332270] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:07.892 [2024-11-18 05:10:31.332295] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:08.460 00:30:08.460 real 0m0.726s 00:30:08.460 user 0m0.514s 00:30:08.460 sys 0m0.111s 00:30:08.460 05:10:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:08.460 ************************************ 00:30:08.460 END TEST bdev_json_nonenclosed 00:30:08.460 ************************************ 00:30:08.460 05:10:31 -- common/autotest_common.sh@10 -- # set +x 00:30:08.460 05:10:31 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:08.460 05:10:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:08.460 05:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:08.460 05:10:31 -- common/autotest_common.sh@10 -- # set +x 00:30:08.460 ************************************ 00:30:08.460 START TEST bdev_json_nonarray 00:30:08.460 ************************************ 00:30:08.460 05:10:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:08.460 [2024-11-18 05:10:31.767544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:08.460 [2024-11-18 05:10:31.768055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95433 ] 00:30:08.460 [2024-11-18 05:10:31.920361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.719 [2024-11-18 05:10:32.068311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.719 [2024-11-18 05:10:32.068526] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
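Schematically, the two negative JSON tests differ only in which validation step rejects the config; the payloads below are hypothetical minimal reproductions (the actual nonenclosed.json and nonarray.json contents are not echoed in this log), with a valid shape for contrast:
$ echo '"subsystems": []' > nonenclosed.json                                         # rejected: not enclosed in {}
$ echo '{ "subsystems": {} }' > nonarray.json                                        # rejected: 'subsystems' should be an array
$ echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > valid.json    # accepted top-level shape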
00:30:08.719 [2024-11-18 05:10:32.068557] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:08.978 00:30:08.978 real 0m0.683s 00:30:08.978 user 0m0.480s 00:30:08.978 sys 0m0.103s 00:30:08.978 05:10:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:08.978 ************************************ 00:30:08.978 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:30:08.978 END TEST bdev_json_nonarray 00:30:08.978 ************************************ 00:30:08.978 05:10:32 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:30:08.978 05:10:32 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:30:08.978 05:10:32 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:30:08.978 05:10:32 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:08.978 05:10:32 -- bdev/blockdev.sh@809 -- # cleanup 00:30:08.978 05:10:32 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:08.978 05:10:32 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:08.978 05:10:32 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:30:08.978 05:10:32 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:30:08.978 05:10:32 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:30:08.978 05:10:32 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:30:08.978 ************************************ 00:30:08.978 END TEST blockdev_raid5f 00:30:08.978 ************************************ 00:30:08.978 00:30:08.978 real 0m44.684s 00:30:08.978 user 1m1.622s 00:30:08.978 sys 0m4.162s 00:30:08.978 05:10:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:08.978 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:30:09.236 05:10:32 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:09.237 05:10:32 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:30:09.237 05:10:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:09.237 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:30:09.237 05:10:32 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:30:09.237 05:10:32 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:30:09.237 05:10:32 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:30:09.237 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:30:11.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:30:11.142 Waiting for block devices as requested 00:30:11.142 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:11.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:30:11.659 Cleaning 00:30:11.659 Removing: /var/run/dpdk/spdk0/config 00:30:11.659 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:11.659 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:11.659 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:11.659 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:11.659 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:11.659 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:11.659 Removing: /dev/shm/spdk_tgt_trace.pid60472 00:30:11.659 Removing: /var/run/dpdk/spdk0 00:30:11.659 Removing: /var/run/dpdk/spdk_pid60260 00:30:11.659 Removing: /var/run/dpdk/spdk_pid60472 00:30:11.659 Removing: /var/run/dpdk/spdk_pid60751 00:30:11.659 Removing: /var/run/dpdk/spdk_pid60995 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61173 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61285 00:30:11.659 Removing: 
/var/run/dpdk/spdk_pid61398 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61527 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61636 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61681 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61712 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61787 00:30:11.659 Removing: /var/run/dpdk/spdk_pid61893 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62394 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62471 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62542 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62571 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62716 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62738 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62873 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62902 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62966 00:30:11.659 Removing: /var/run/dpdk/spdk_pid62992 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63056 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63081 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63267 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63309 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63345 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63433 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63516 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63547 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63625 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63651 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63699 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63725 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63771 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63803 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63844 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63876 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63917 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63947 00:30:11.659 Removing: /var/run/dpdk/spdk_pid63995 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64021 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64062 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64088 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64135 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64161 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64202 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64234 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64280 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64306 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64347 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64379 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64421 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64447 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64499 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64525 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64566 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64596 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64644 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64670 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64711 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64743 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64784 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64813 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64863 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64897 00:30:11.659 Removing: /var/run/dpdk/spdk_pid64941 00:30:11.659 Removing: /var/run/dpdk/spdk_pid65090 00:30:11.659 Removing: /var/run/dpdk/spdk_pid65131 00:30:11.659 Removing: /var/run/dpdk/spdk_pid65157 00:30:11.919 Removing: /var/run/dpdk/spdk_pid65205 00:30:11.919 Removing: /var/run/dpdk/spdk_pid65294 00:30:11.919 Removing: /var/run/dpdk/spdk_pid65410 00:30:11.919 Removing: /var/run/dpdk/spdk_pid65598 00:30:11.919 Removing: /var/run/dpdk/spdk_pid65677 
00:30:11.919 Removing: /var/run/dpdk/spdk_pid65726 00:30:11.919 Removing: /var/run/dpdk/spdk_pid66933 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67133 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67330 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67439 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67565 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67628 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67655 00:30:11.919 Removing: /var/run/dpdk/spdk_pid67686 00:30:11.919 Removing: /var/run/dpdk/spdk_pid68104 00:30:11.919 Removing: /var/run/dpdk/spdk_pid68181 00:30:11.919 Removing: /var/run/dpdk/spdk_pid68288 00:30:11.919 Removing: /var/run/dpdk/spdk_pid68346 00:30:11.919 Removing: /var/run/dpdk/spdk_pid69467 00:30:11.919 Removing: /var/run/dpdk/spdk_pid70286 00:30:11.919 Removing: /var/run/dpdk/spdk_pid71104 00:30:11.919 Removing: /var/run/dpdk/spdk_pid72113 00:30:11.919 Removing: /var/run/dpdk/spdk_pid73083 00:30:11.919 Removing: /var/run/dpdk/spdk_pid74049 00:30:11.919 Removing: /var/run/dpdk/spdk_pid75395 00:30:11.919 Removing: /var/run/dpdk/spdk_pid76488 00:30:11.919 Removing: /var/run/dpdk/spdk_pid77585 00:30:11.919 Removing: /var/run/dpdk/spdk_pid78200 00:30:11.919 Removing: /var/run/dpdk/spdk_pid78707 00:30:11.919 Removing: /var/run/dpdk/spdk_pid79290 00:30:11.919 Removing: /var/run/dpdk/spdk_pid79727 00:30:11.919 Removing: /var/run/dpdk/spdk_pid80237 00:30:11.919 Removing: /var/run/dpdk/spdk_pid80734 00:30:11.919 Removing: /var/run/dpdk/spdk_pid81325 00:30:11.919 Removing: /var/run/dpdk/spdk_pid81804 00:30:11.919 Removing: /var/run/dpdk/spdk_pid83034 00:30:11.919 Removing: /var/run/dpdk/spdk_pid83561 00:30:11.919 Removing: /var/run/dpdk/spdk_pid84049 00:30:11.919 Removing: /var/run/dpdk/spdk_pid85382 00:30:11.919 Removing: /var/run/dpdk/spdk_pid85971 00:30:11.919 Removing: /var/run/dpdk/spdk_pid86535 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87236 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87284 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87331 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87381 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87525 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87668 00:30:11.919 Removing: /var/run/dpdk/spdk_pid87894 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88167 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88180 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88223 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88244 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88268 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88298 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88317 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88343 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88373 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88392 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88418 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88448 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88467 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88493 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88517 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88542 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88562 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88592 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88617 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88637 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88683 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88703 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88743 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88824 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88858 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88880 00:30:11.919 Removing: 
/var/run/dpdk/spdk_pid88915 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88941 00:30:11.919 Removing: /var/run/dpdk/spdk_pid88957 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89010 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89028 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89067 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89085 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89106 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89120 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89140 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89158 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89179 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89193 00:30:11.919 Removing: /var/run/dpdk/spdk_pid89232 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89269 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89292 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89327 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89349 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89363 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89416 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89438 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89474 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89492 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89506 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89530 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89545 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89565 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89579 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89599 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89692 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89763 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89906 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89923 00:30:12.178 Removing: /var/run/dpdk/spdk_pid89967 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90023 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90051 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90082 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90103 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90145 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90176 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90253 00:30:12.178 Removing: /var/run/dpdk/spdk_pid90309 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90354 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90596 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90703 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90737 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90832 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90905 00:30:12.179 Removing: /var/run/dpdk/spdk_pid90943 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91173 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91314 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91407 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91455 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91481 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91558 00:30:12.179 Removing: /var/run/dpdk/spdk_pid91958 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92000 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92285 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92379 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92478 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92520 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92550 00:30:12.179 Removing: /var/run/dpdk/spdk_pid92577 00:30:12.179 Removing: /var/run/dpdk/spdk_pid93776 00:30:12.179 Removing: /var/run/dpdk/spdk_pid93912 00:30:12.179 Removing: /var/run/dpdk/spdk_pid93916 00:30:12.179 Removing: /var/run/dpdk/spdk_pid93933 00:30:12.179 Removing: /var/run/dpdk/spdk_pid94391 00:30:12.179 Removing: /var/run/dpdk/spdk_pid94495 00:30:12.179 Removing: /var/run/dpdk/spdk_pid94643 
00:30:12.179 Removing: /var/run/dpdk/spdk_pid94707 00:30:12.179 Removing: /var/run/dpdk/spdk_pid94745 00:30:12.179 Removing: /var/run/dpdk/spdk_pid95008 00:30:12.179 Removing: /var/run/dpdk/spdk_pid95179 00:30:12.179 Removing: /var/run/dpdk/spdk_pid95267 00:30:12.179 Removing: /var/run/dpdk/spdk_pid95360 00:30:12.179 Removing: /var/run/dpdk/spdk_pid95402 00:30:12.179 Removing: /var/run/dpdk/spdk_pid95433 00:30:12.179 Clean 00:30:12.438 killing process with pid 51358 00:30:12.438 killing process with pid 51367 00:30:12.438 05:10:35 -- common/autotest_common.sh@1446 -- # return 0 00:30:12.438 05:10:35 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:12.438 05:10:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.438 05:10:35 -- common/autotest_common.sh@10 -- # set +x 00:30:12.438 05:10:35 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:12.438 05:10:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.438 05:10:35 -- common/autotest_common.sh@10 -- # set +x 00:30:12.438 05:10:35 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:12.438 05:10:35 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:12.438 05:10:35 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:12.438 05:10:35 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:12.438 05:10:35 -- spdk/autotest.sh@383 -- # hostname 00:30:12.438 05:10:35 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:12.696 geninfo: WARNING: invalid characters removed from testname! 
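The lcov calls that follow implement a standard merge-then-filter coverage pipeline: combine the pre-test and post-test captures, then strip out sources that should not count against SPDK's own coverage. Condensed, with $OUT standing in for /home/vagrant/spdk_repo/spdk/../output and the long --rc option lists elided, the sequence amounts to:

# Condensed restatement of the autotest.sh coverage steps logged below.
OUT=/home/vagrant/spdk_repo/spdk/../output
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge captures
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"               # drop DPDK sources
lcov -q -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused \
     -o "$OUT/cov_total.info"                                                      # drop system headers
lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"       # drop example code
lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"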
00:30:59.378 05:11:20 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:01.912 05:11:25 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:05.200 05:11:28 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:07.737 05:11:30 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:10.274 05:11:33 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:13.563 05:11:36 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:16.124 05:11:39 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:16.124 05:11:39 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:31:16.124 05:11:39 -- common/autotest_common.sh@1690 -- $ lcov --version 00:31:16.124 05:11:39 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:31:16.124 05:11:39 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:31:16.124 05:11:39 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:31:16.124 05:11:39 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:31:16.124 05:11:39 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:31:16.124 05:11:39 -- scripts/common.sh@335 -- $ IFS=.-: 00:31:16.124 05:11:39 -- scripts/common.sh@335 -- $ read -ra ver1 00:31:16.124 05:11:39 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:16.124 05:11:39 -- scripts/common.sh@336 -- $ read -ra ver2 00:31:16.124 05:11:39 -- scripts/common.sh@337 -- $ local 'op=<' 00:31:16.124 05:11:39 -- scripts/common.sh@339 -- $ ver1_l=2 00:31:16.124 05:11:39 -- scripts/common.sh@340 -- $ ver2_l=1 00:31:16.124 05:11:39 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:31:16.124 05:11:39 -- scripts/common.sh@343 -- $ case "$op" in 00:31:16.124 05:11:39 -- scripts/common.sh@344 -- $ : 1 00:31:16.125 05:11:39 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:31:16.125 05:11:39 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.125 05:11:39 -- scripts/common.sh@364 -- $ decimal 1 00:31:16.125 05:11:39 -- scripts/common.sh@352 -- $ local d=1 00:31:16.125 05:11:39 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:16.125 05:11:39 -- scripts/common.sh@354 -- $ echo 1 00:31:16.125 05:11:39 -- scripts/common.sh@364 -- $ ver1[v]=1 00:31:16.125 05:11:39 -- scripts/common.sh@365 -- $ decimal 2 00:31:16.125 05:11:39 -- scripts/common.sh@352 -- $ local d=2 00:31:16.125 05:11:39 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:16.125 05:11:39 -- scripts/common.sh@354 -- $ echo 2 00:31:16.125 05:11:39 -- scripts/common.sh@365 -- $ ver2[v]=2 00:31:16.125 05:11:39 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:31:16.125 05:11:39 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:31:16.125 05:11:39 -- scripts/common.sh@367 -- $ return 0 00:31:16.125 05:11:39 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.125 05:11:39 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:31:16.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.125 --rc genhtml_branch_coverage=1 00:31:16.125 --rc genhtml_function_coverage=1 00:31:16.125 --rc genhtml_legend=1 00:31:16.125 --rc geninfo_all_blocks=1 00:31:16.125 --rc geninfo_unexecuted_blocks=1 00:31:16.125 00:31:16.125 ' 00:31:16.125 05:11:39 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:31:16.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.125 --rc genhtml_branch_coverage=1 00:31:16.125 --rc genhtml_function_coverage=1 00:31:16.125 --rc genhtml_legend=1 00:31:16.125 --rc geninfo_all_blocks=1 00:31:16.125 --rc geninfo_unexecuted_blocks=1 00:31:16.125 00:31:16.125 ' 00:31:16.125 05:11:39 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:31:16.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.125 --rc genhtml_branch_coverage=1 00:31:16.125 --rc genhtml_function_coverage=1 00:31:16.125 --rc genhtml_legend=1 00:31:16.125 --rc geninfo_all_blocks=1 00:31:16.125 --rc geninfo_unexecuted_blocks=1 00:31:16.125 00:31:16.125 ' 00:31:16.125 05:11:39 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:31:16.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.125 --rc genhtml_branch_coverage=1 00:31:16.125 --rc genhtml_function_coverage=1 00:31:16.125 --rc genhtml_legend=1 00:31:16.125 --rc geninfo_all_blocks=1 00:31:16.125 --rc geninfo_unexecuted_blocks=1 00:31:16.125 00:31:16.125 ' 00:31:16.125 05:11:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:16.125 05:11:39 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:16.125 05:11:39 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.125 05:11:39 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.125 05:11:39 -- paths/export.sh@2 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:16.125 05:11:39 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:16.125 05:11:39 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:16.125 05:11:39 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:16.125 05:11:39 -- paths/export.sh@6 -- $ export PATH 00:31:16.125 05:11:39 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:16.125 05:11:39 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:16.125 05:11:39 -- common/autobuild_common.sh@440 -- $ date +%s 00:31:16.125 05:11:39 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731906699.XXXXXX 00:31:16.125 05:11:39 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731906699.j1UCjH 00:31:16.125 05:11:39 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:31:16.125 05:11:39 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:31:16.125 05:11:39 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:16.125 05:11:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:16.125 05:11:39 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:16.125 05:11:39 -- common/autobuild_common.sh@456 -- $ get_config_params 00:31:16.125 05:11:39 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:16.125 05:11:39 -- common/autotest_common.sh@10 -- $ set +x 00:31:16.125 05:11:39 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f' 00:31:16.125 05:11:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:16.125 05:11:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:16.125 05:11:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:16.125 05:11:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:16.125 05:11:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:16.125 05:11:39 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:31:16.125 05:11:39 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:31:16.125 05:11:39 -- common/autotest_common.sh@10 -- $ set +x 00:31:16.125 05:11:39 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:31:16.125 05:11:39 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:31:16.125 05:11:39 -- spdk/autopackage.sh@40 -- $ get_config_params 00:31:16.125 05:11:39 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:31:16.125 05:11:39 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:16.125 05:11:39 -- common/autotest_common.sh@10 -- $ set +x 00:31:16.125 05:11:39 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:31:16.125 05:11:39 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --enable-lto 00:31:16.125 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:31:16.125 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:31:16.692 Using 'verbs' RDMA provider 00:31:29.470 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:31:41.682 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:31:41.682 Creating mk/config.mk...done. 00:31:41.682 Creating mk/cc.flags.mk...done. 00:31:41.682 Type 'make' to build. 00:31:41.682 05:12:04 -- spdk/autopackage.sh@43 -- $ make -j10 00:31:41.682 make[1]: Nothing to be done for 'all'. 
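The scripts/common.sh xtrace a few entries back (the "lt 1.15 2" walk through cmp_versions) is how autotest decides whether the installed lcov predates version 2 and therefore gets the plain --rc option names. A simplified, self-contained re-implementation of that comparison (the real helper also sanitizes each component through a decimal() function; this sketch assumes purely numeric, dot/dash/colon-separated components):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}             # missing components count as 0
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                   # every component compared equal
}

lt 1.15 2 && echo "lcov older than 2: use plain --rc lcov_*_coverage options"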
00:31:45.972 The Meson build system 00:31:45.972 Version: 1.4.1 00:31:45.972 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:31:45.972 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:31:45.972 Build type: native build 00:31:45.972 Program cat found: YES (/usr/bin/cat) 00:31:45.972 Project name: DPDK 00:31:45.972 Project version: 23.11.0 00:31:45.972 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:31:45.972 C linker for the host machine: cc ld.bfd 2.42 00:31:45.972 Host machine cpu family: x86_64 00:31:45.972 Host machine cpu: x86_64 00:31:45.972 Message: ## Building in Developer Mode ## 00:31:45.972 Program pkg-config found: YES (/usr/bin/pkg-config) 00:31:45.972 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:31:45.972 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:31:45.972 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:31:45.972 Program cat found: YES (/usr/bin/cat) 00:31:45.972 Compiler for C supports arguments -march=native: YES 00:31:45.972 Checking for size of "void *" : 8 00:31:45.972 Checking for size of "void *" : 8 (cached) 00:31:45.972 Library m found: YES 00:31:45.972 Library numa found: YES 00:31:45.972 Has header "numaif.h" : YES 00:31:45.972 Library fdt found: NO 00:31:45.972 Library execinfo found: NO 00:31:45.972 Has header "execinfo.h" : YES 00:31:45.972 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:31:45.972 Run-time dependency libarchive found: NO (tried pkgconfig) 00:31:45.972 Run-time dependency libbsd found: NO (tried pkgconfig) 00:31:45.972 Run-time dependency jansson found: NO (tried pkgconfig) 00:31:45.972 Run-time dependency openssl found: YES 3.0.13 00:31:45.972 Run-time dependency libpcap found: NO (tried pkgconfig) 00:31:45.972 Library pcap found: NO 00:31:45.972 Compiler for C supports arguments -Wcast-qual: YES 00:31:45.972 Compiler for C supports arguments -Wdeprecated: YES 00:31:45.972 Compiler for C supports arguments -Wformat: YES 00:31:45.972 Compiler for C supports arguments -Wformat-nonliteral: YES 00:31:45.972 Compiler for C supports arguments -Wformat-security: YES 00:31:45.972 Compiler for C supports arguments -Wmissing-declarations: YES 00:31:45.972 Compiler for C supports arguments -Wmissing-prototypes: YES 00:31:45.972 Compiler for C supports arguments -Wnested-externs: YES 00:31:45.972 Compiler for C supports arguments -Wold-style-definition: YES 00:31:45.972 Compiler for C supports arguments -Wpointer-arith: YES 00:31:45.972 Compiler for C supports arguments -Wsign-compare: YES 00:31:45.972 Compiler for C supports arguments -Wstrict-prototypes: YES 00:31:45.972 Compiler for C supports arguments -Wundef: YES 00:31:45.972 Compiler for C supports arguments -Wwrite-strings: YES 00:31:45.972 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:31:45.972 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:31:45.973 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:31:45.973 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:31:45.973 Program objdump found: YES (/usr/bin/objdump) 00:31:45.973 Compiler for C supports arguments -mavx512f: YES 00:31:45.973 Checking if "AVX512 checking" compiles: YES 00:31:45.973 Fetching value of define "__SSE4_2__" : 1 00:31:45.973 Fetching value of define "__AES__" : 1 00:31:45.973 Fetching value of define "__AVX__" : 1 00:31:45.973 
Fetching value of define "__AVX2__" : 1 00:31:45.973 Fetching value of define "__AVX512BW__" : (undefined) 00:31:45.973 Fetching value of define "__AVX512CD__" : (undefined) 00:31:45.973 Fetching value of define "__AVX512DQ__" : (undefined) 00:31:45.973 Fetching value of define "__AVX512F__" : (undefined) 00:31:45.973 Fetching value of define "__AVX512VL__" : (undefined) 00:31:45.973 Fetching value of define "__PCLMUL__" : 1 00:31:45.973 Fetching value of define "__RDRND__" : 1 00:31:45.973 Fetching value of define "__RDSEED__" : 1 00:31:45.973 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:31:45.973 Fetching value of define "__znver1__" : (undefined) 00:31:45.973 Fetching value of define "__znver2__" : (undefined) 00:31:45.973 Fetching value of define "__znver3__" : (undefined) 00:31:45.973 Fetching value of define "__znver4__" : (undefined) 00:31:45.973 Compiler for C supports arguments -ffat-lto-objects: YES 00:31:45.973 Library asan found: YES 00:31:45.973 Compiler for C supports arguments -Wno-format-truncation: YES 00:31:45.973 Message: lib/log: Defining dependency "log" 00:31:45.973 Message: lib/kvargs: Defining dependency "kvargs" 00:31:45.973 Message: lib/telemetry: Defining dependency "telemetry" 00:31:45.973 Library rt found: YES 00:31:45.973 Checking for function "getentropy" : NO 00:31:45.973 Message: lib/eal: Defining dependency "eal" 00:31:45.973 Message: lib/ring: Defining dependency "ring" 00:31:45.973 Message: lib/rcu: Defining dependency "rcu" 00:31:45.973 Message: lib/mempool: Defining dependency "mempool" 00:31:45.973 Message: lib/mbuf: Defining dependency "mbuf" 00:31:45.973 Fetching value of define "__PCLMUL__" : 1 (cached) 00:31:45.973 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:31:45.973 Compiler for C supports arguments -mpclmul: YES 00:31:45.973 Compiler for C supports arguments -maes: YES 00:31:45.973 Compiler for C supports arguments -mavx512f: YES (cached) 00:31:45.973 Compiler for C supports arguments -mavx512bw: YES 00:31:45.973 Compiler for C supports arguments -mavx512dq: YES 00:31:45.973 Compiler for C supports arguments -mavx512vl: YES 00:31:45.973 Compiler for C supports arguments -mvpclmulqdq: YES 00:31:45.973 Compiler for C supports arguments -mavx2: YES 00:31:45.973 Compiler for C supports arguments -mavx: YES 00:31:45.973 Message: lib/net: Defining dependency "net" 00:31:45.973 Message: lib/meter: Defining dependency "meter" 00:31:45.973 Message: lib/ethdev: Defining dependency "ethdev" 00:31:45.973 Message: lib/pci: Defining dependency "pci" 00:31:45.973 Message: lib/cmdline: Defining dependency "cmdline" 00:31:45.973 Message: lib/hash: Defining dependency "hash" 00:31:45.973 Message: lib/timer: Defining dependency "timer" 00:31:45.973 Message: lib/compressdev: Defining dependency "compressdev" 00:31:45.973 Message: lib/cryptodev: Defining dependency "cryptodev" 00:31:45.973 Message: lib/dmadev: Defining dependency "dmadev" 00:31:45.973 Compiler for C supports arguments -Wno-cast-qual: YES 00:31:45.973 Message: lib/power: Defining dependency "power" 00:31:45.973 Message: lib/reorder: Defining dependency "reorder" 00:31:45.973 Message: lib/security: Defining dependency "security" 00:31:45.973 Has header "linux/userfaultfd.h" : YES 00:31:45.973 Has header "linux/vduse.h" : YES 00:31:45.973 Message: lib/vhost: Defining dependency "vhost" 00:31:45.973 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:31:45.973 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:31:45.973 Message: 
drivers/bus/vdev: Defining dependency "bus_vdev" 00:31:45.973 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:31:45.973 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:31:45.973 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:31:45.973 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:31:45.973 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:31:45.973 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:31:45.973 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:31:45.973 Program doxygen found: YES (/usr/bin/doxygen) 00:31:45.973 Configuring doxy-api-html.conf using configuration 00:31:45.973 Configuring doxy-api-man.conf using configuration 00:31:45.973 Program mandb found: YES (/usr/bin/mandb) 00:31:45.973 Program sphinx-build found: NO 00:31:45.973 Configuring rte_build_config.h using configuration 00:31:45.973 Message: 00:31:45.973 ================= 00:31:45.973 Applications Enabled 00:31:45.973 ================= 00:31:45.973 00:31:45.973 apps: 00:31:45.973 00:31:45.973 00:31:45.973 Message: 00:31:45.973 ================= 00:31:45.973 Libraries Enabled 00:31:45.973 ================= 00:31:45.973 00:31:45.973 libs: 00:31:45.973 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:31:45.973 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:31:45.973 cryptodev, dmadev, power, reorder, security, vhost, 00:31:45.973 00:31:45.973 Message: 00:31:45.973 =============== 00:31:45.973 Drivers Enabled 00:31:45.973 =============== 00:31:45.973 00:31:45.973 common: 00:31:45.973 00:31:45.973 bus: 00:31:45.973 pci, vdev, 00:31:45.973 mempool: 00:31:45.973 ring, 00:31:45.973 dma: 00:31:45.973 00:31:45.973 net: 00:31:45.973 00:31:45.973 crypto: 00:31:45.973 00:31:45.973 compress: 00:31:45.973 00:31:45.973 vdpa: 00:31:45.973 00:31:45.973 00:31:45.973 Message: 00:31:45.973 ================= 00:31:45.973 Content Skipped 00:31:45.973 ================= 00:31:45.973 00:31:45.973 apps: 00:31:45.973 dumpcap: explicitly disabled via build config 00:31:45.973 graph: explicitly disabled via build config 00:31:45.973 pdump: explicitly disabled via build config 00:31:45.973 proc-info: explicitly disabled via build config 00:31:45.973 test-acl: explicitly disabled via build config 00:31:45.973 test-bbdev: explicitly disabled via build config 00:31:45.973 test-cmdline: explicitly disabled via build config 00:31:45.973 test-compress-perf: explicitly disabled via build config 00:31:45.973 test-crypto-perf: explicitly disabled via build config 00:31:45.973 test-dma-perf: explicitly disabled via build config 00:31:45.973 test-eventdev: explicitly disabled via build config 00:31:45.973 test-fib: explicitly disabled via build config 00:31:45.973 test-flow-perf: explicitly disabled via build config 00:31:45.973 test-gpudev: explicitly disabled via build config 00:31:45.973 test-mldev: explicitly disabled via build config 00:31:45.973 test-pipeline: explicitly disabled via build config 00:31:45.973 test-pmd: explicitly disabled via build config 00:31:45.973 test-regex: explicitly disabled via build config 00:31:45.973 test-sad: explicitly disabled via build config 00:31:45.973 test-security-perf: explicitly disabled via build config 00:31:45.973 00:31:45.973 libs: 00:31:45.973 metrics: explicitly disabled via build config 00:31:45.973 acl: explicitly disabled via build config 00:31:45.973 bbdev: explicitly disabled via build config 
00:31:45.973 bitratestats: explicitly disabled via build config 00:31:45.973 bpf: explicitly disabled via build config 00:31:45.973 cfgfile: explicitly disabled via build config 00:31:45.973 distributor: explicitly disabled via build config 00:31:45.973 efd: explicitly disabled via build config 00:31:45.973 eventdev: explicitly disabled via build config 00:31:45.973 dispatcher: explicitly disabled via build config 00:31:45.973 gpudev: explicitly disabled via build config 00:31:45.973 gro: explicitly disabled via build config 00:31:45.973 gso: explicitly disabled via build config 00:31:45.973 ip_frag: explicitly disabled via build config 00:31:45.973 jobstats: explicitly disabled via build config 00:31:45.973 latencystats: explicitly disabled via build config 00:31:45.973 lpm: explicitly disabled via build config 00:31:45.973 member: explicitly disabled via build config 00:31:45.973 pcapng: explicitly disabled via build config 00:31:45.973 rawdev: explicitly disabled via build config 00:31:45.973 regexdev: explicitly disabled via build config 00:31:45.973 mldev: explicitly disabled via build config 00:31:45.973 rib: explicitly disabled via build config 00:31:45.973 sched: explicitly disabled via build config 00:31:45.973 stack: explicitly disabled via build config 00:31:45.973 ipsec: explicitly disabled via build config 00:31:45.973 pdcp: explicitly disabled via build config 00:31:45.973 fib: explicitly disabled via build config 00:31:45.973 port: explicitly disabled via build config 00:31:45.973 pdump: explicitly disabled via build config 00:31:45.973 table: explicitly disabled via build config 00:31:45.973 pipeline: explicitly disabled via build config 00:31:45.973 graph: explicitly disabled via build config 00:31:45.973 node: explicitly disabled via build config 00:31:45.973 00:31:45.973 drivers: 00:31:45.973 common/cpt: not in enabled drivers build config 00:31:45.973 common/dpaax: not in enabled drivers build config 00:31:45.973 common/iavf: not in enabled drivers build config 00:31:45.973 common/idpf: not in enabled drivers build config 00:31:45.973 common/mvep: not in enabled drivers build config 00:31:45.973 common/octeontx: not in enabled drivers build config 00:31:45.973 bus/auxiliary: not in enabled drivers build config 00:31:45.973 bus/cdx: not in enabled drivers build config 00:31:45.973 bus/dpaa: not in enabled drivers build config 00:31:45.973 bus/fslmc: not in enabled drivers build config 00:31:45.973 bus/ifpga: not in enabled drivers build config 00:31:45.974 bus/platform: not in enabled drivers build config 00:31:45.974 bus/vmbus: not in enabled drivers build config 00:31:45.974 common/cnxk: not in enabled drivers build config 00:31:45.974 common/mlx5: not in enabled drivers build config 00:31:45.974 common/nfp: not in enabled drivers build config 00:31:45.974 common/qat: not in enabled drivers build config 00:31:45.974 common/sfc_efx: not in enabled drivers build config 00:31:45.974 mempool/bucket: not in enabled drivers build config 00:31:45.974 mempool/cnxk: not in enabled drivers build config 00:31:45.974 mempool/dpaa: not in enabled drivers build config 00:31:45.974 mempool/dpaa2: not in enabled drivers build config 00:31:45.974 mempool/octeontx: not in enabled drivers build config 00:31:45.974 mempool/stack: not in enabled drivers build config 00:31:45.974 dma/cnxk: not in enabled drivers build config 00:31:45.974 dma/dpaa: not in enabled drivers build config 00:31:45.974 dma/dpaa2: not in enabled drivers build config 00:31:45.974 dma/hisilicon: not in enabled 
drivers build config 00:31:45.974 dma/idxd: not in enabled drivers build config 00:31:45.974 dma/ioat: not in enabled drivers build config 00:31:45.974 dma/skeleton: not in enabled drivers build config 00:31:45.974 net/af_packet: not in enabled drivers build config 00:31:45.974 net/af_xdp: not in enabled drivers build config 00:31:45.974 net/ark: not in enabled drivers build config 00:31:45.974 net/atlantic: not in enabled drivers build config 00:31:45.974 net/avp: not in enabled drivers build config 00:31:45.974 net/axgbe: not in enabled drivers build config 00:31:45.974 net/bnx2x: not in enabled drivers build config 00:31:45.974 net/bnxt: not in enabled drivers build config 00:31:45.974 net/bonding: not in enabled drivers build config 00:31:45.974 net/cnxk: not in enabled drivers build config 00:31:45.974 net/cpfl: not in enabled drivers build config 00:31:45.974 net/cxgbe: not in enabled drivers build config 00:31:45.974 net/dpaa: not in enabled drivers build config 00:31:45.974 net/dpaa2: not in enabled drivers build config 00:31:45.974 net/e1000: not in enabled drivers build config 00:31:45.974 net/ena: not in enabled drivers build config 00:31:45.974 net/enetc: not in enabled drivers build config 00:31:45.974 net/enetfec: not in enabled drivers build config 00:31:45.974 net/enic: not in enabled drivers build config 00:31:45.974 net/failsafe: not in enabled drivers build config 00:31:45.974 net/fm10k: not in enabled drivers build config 00:31:45.974 net/gve: not in enabled drivers build config 00:31:45.974 net/hinic: not in enabled drivers build config 00:31:45.974 net/hns3: not in enabled drivers build config 00:31:45.974 net/i40e: not in enabled drivers build config 00:31:45.974 net/iavf: not in enabled drivers build config 00:31:45.974 net/ice: not in enabled drivers build config 00:31:45.974 net/idpf: not in enabled drivers build config 00:31:45.974 net/igc: not in enabled drivers build config 00:31:45.974 net/ionic: not in enabled drivers build config 00:31:45.974 net/ipn3ke: not in enabled drivers build config 00:31:45.974 net/ixgbe: not in enabled drivers build config 00:31:45.974 net/mana: not in enabled drivers build config 00:31:45.974 net/memif: not in enabled drivers build config 00:31:45.974 net/mlx4: not in enabled drivers build config 00:31:45.974 net/mlx5: not in enabled drivers build config 00:31:45.974 net/mvneta: not in enabled drivers build config 00:31:45.974 net/mvpp2: not in enabled drivers build config 00:31:45.974 net/netvsc: not in enabled drivers build config 00:31:45.974 net/nfb: not in enabled drivers build config 00:31:45.974 net/nfp: not in enabled drivers build config 00:31:45.974 net/ngbe: not in enabled drivers build config 00:31:45.974 net/null: not in enabled drivers build config 00:31:45.974 net/octeontx: not in enabled drivers build config 00:31:45.974 net/octeon_ep: not in enabled drivers build config 00:31:45.974 net/pcap: not in enabled drivers build config 00:31:45.974 net/pfe: not in enabled drivers build config 00:31:45.974 net/qede: not in enabled drivers build config 00:31:45.974 net/ring: not in enabled drivers build config 00:31:45.974 net/sfc: not in enabled drivers build config 00:31:45.974 net/softnic: not in enabled drivers build config 00:31:45.974 net/tap: not in enabled drivers build config 00:31:45.974 net/thunderx: not in enabled drivers build config 00:31:45.974 net/txgbe: not in enabled drivers build config 00:31:45.974 net/vdev_netvsc: not in enabled drivers build config 00:31:45.974 net/vhost: not in enabled drivers build 
config 00:31:45.974 net/virtio: not in enabled drivers build config 00:31:45.974 net/vmxnet3: not in enabled drivers build config 00:31:45.974 raw/*: missing internal dependency, "rawdev" 00:31:45.974 crypto/armv8: not in enabled drivers build config 00:31:45.974 crypto/bcmfs: not in enabled drivers build config 00:31:45.974 crypto/caam_jr: not in enabled drivers build config 00:31:45.974 crypto/ccp: not in enabled drivers build config 00:31:45.974 crypto/cnxk: not in enabled drivers build config 00:31:45.974 crypto/dpaa_sec: not in enabled drivers build config 00:31:45.974 crypto/dpaa2_sec: not in enabled drivers build config 00:31:45.974 crypto/ipsec_mb: not in enabled drivers build config 00:31:45.974 crypto/mlx5: not in enabled drivers build config 00:31:45.974 crypto/mvsam: not in enabled drivers build config 00:31:45.974 crypto/nitrox: not in enabled drivers build config 00:31:45.974 crypto/null: not in enabled drivers build config 00:31:45.974 crypto/octeontx: not in enabled drivers build config 00:31:45.974 crypto/openssl: not in enabled drivers build config 00:31:45.974 crypto/scheduler: not in enabled drivers build config 00:31:45.974 crypto/uadk: not in enabled drivers build config 00:31:45.974 crypto/virtio: not in enabled drivers build config 00:31:45.974 compress/isal: not in enabled drivers build config 00:31:45.974 compress/mlx5: not in enabled drivers build config 00:31:45.974 compress/octeontx: not in enabled drivers build config 00:31:45.974 compress/zlib: not in enabled drivers build config 00:31:45.974 regex/*: missing internal dependency, "regexdev" 00:31:45.974 ml/*: missing internal dependency, "mldev" 00:31:45.974 vdpa/ifc: not in enabled drivers build config 00:31:45.974 vdpa/mlx5: not in enabled drivers build config 00:31:45.974 vdpa/nfp: not in enabled drivers build config 00:31:45.974 vdpa/sfc: not in enabled drivers build config 00:31:45.974 event/*: missing internal dependency, "eventdev" 00:31:45.974 baseband/*: missing internal dependency, "bbdev" 00:31:45.974 gpu/*: missing internal dependency, "gpudev" 00:31:45.974 00:31:45.974 00:31:46.233 Build targets in project: 85 00:31:46.233 00:31:46.233 DPDK 23.11.0 00:31:46.233 00:31:46.233 User defined options 00:31:46.233 default_library : static 00:31:46.233 libdir : lib 00:31:46.233 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:31:46.233 b_lto : true 00:31:46.233 b_sanitize : address 00:31:46.233 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:31:46.233 c_link_args : 00:31:46.233 cpu_instruction_set: native 00:31:46.233 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:31:46.233 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:31:46.233 enable_docs : false 00:31:46.233 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:31:46.233 enable_kmods : false 00:31:46.233 tests : false 00:31:46.233 00:31:46.233 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:31:46.801 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:31:47.059 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
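For reference, the "User defined options" summary above pins down how this DPDK 23.11.0 tree was configured. In this job the invocation is generated by SPDK's configure/autopackage scripts, so the following hand-written meson call is only an approximation of the same setup (build-tmp matches the build dir in the log; the long disable_apps/disable_libs lists are elided here and spelled out in full above):

# Approximate stand-alone equivalent of the DPDK configuration shown above.
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
    -Ddefault_library=static \
    -Db_lto=true \
    -Db_sanitize=address \
    -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false
    # plus -Ddisable_apps=... and -Ddisable_libs=... using the lists printed above
ninja -C build-tmp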
00:31:47.059 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:31:47.059 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:31:47.059 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:31:47.059 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:31:47.059 [6/265] Linking static target lib/librte_kvargs.a 00:31:47.059 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:31:47.059 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:31:47.059 [9/265] Linking static target lib/librte_log.a 00:31:47.317 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:31:47.317 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:31:47.576 [12/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:31:47.576 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:31:47.576 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:31:47.576 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:31:47.835 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:31:47.835 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:31:48.094 [18/265] Linking target lib/librte_log.so.24.0 00:31:48.094 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:31:48.094 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:31:48.094 [21/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:31:48.353 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:31:48.353 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:31:48.353 [24/265] Linking target lib/librte_kvargs.so.24.0 00:31:48.353 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:31:48.353 [26/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:31:48.611 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:31:48.611 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:31:48.611 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:31:48.870 [30/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:31:48.870 [31/265] Linking static target lib/librte_telemetry.a 00:31:48.870 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:31:48.870 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:31:49.129 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:31:49.129 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:31:49.129 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:31:49.129 [37/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:31:49.129 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:31:49.129 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:31:49.129 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:31:49.387 [41/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:31:49.387 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:31:49.647 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:31:49.647 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:31:49.906 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:31:49.906 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:31:50.165 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:31:50.165 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:31:50.165 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:31:50.165 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:31:50.423 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:31:50.423 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:31:50.423 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:31:50.682 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:31:50.682 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:31:50.682 [56/265] Linking target lib/librte_telemetry.so.24.0 00:31:50.682 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:31:50.682 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:31:50.941 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:31:50.941 [60/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:31:50.941 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:31:50.941 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:31:50.941 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:31:50.941 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:31:51.200 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:31:51.200 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:31:51.200 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:31:51.200 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:31:51.459 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:31:51.718 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:31:51.718 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:31:51.718 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:31:51.718 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:31:51.718 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:31:51.718 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:31:51.718 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:31:51.977 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:31:51.977 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:31:52.235 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:31:52.235 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:31:52.494 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 
00:31:52.494 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:31:52.494 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:31:52.494 [84/265] Linking static target lib/librte_ring.a
00:31:52.494 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:31:52.752 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:31:52.752 [87/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:31:53.011 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:31:53.011 [89/265] Linking static target lib/librte_eal.a
00:31:53.011 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:31:53.011 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:31:53.270 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:31:53.270 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:31:53.270 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:31:53.270 [95/265] Linking static target lib/librte_mempool.a
00:31:53.529 [96/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:31:53.529 [97/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:31:53.787 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:31:53.787 [99/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:31:53.787 [100/265] Linking static target lib/librte_rcu.a
00:31:54.046 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:31:54.046 [102/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:31:54.046 [103/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:31:54.046 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:31:54.046 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:31:54.305 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:31:54.305 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:31:54.305 [108/265] Linking static target lib/librte_net.a
00:31:54.305 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:31:54.305 [110/265] Linking static target lib/librte_meter.a
00:31:54.563 [111/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:31:54.563 [112/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:31:54.822 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:31:54.822 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:31:54.822 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:31:55.080 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:31:55.339 [117/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:31:55.339 [118/265] Linking static target lib/librte_mbuf.a
00:31:55.339 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:31:55.913 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:31:55.913 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:31:55.913 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:31:56.172 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:31:56.431 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:31:56.431 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:31:56.431 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:31:56.431 [127/265] Linking static target lib/librte_pci.a
00:31:56.431 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:31:56.689 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:31:56.689 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:31:56.689 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:31:56.689 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:31:56.689 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:31:56.947 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:31:56.947 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:31:56.947 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:31:56.947 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:31:56.947 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:31:56.947 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:31:56.947 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:31:57.206 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:31:57.464 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:31:57.465 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:31:57.465 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:31:57.465 [145/265] Linking static target lib/librte_cmdline.a
00:31:57.723 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:31:57.982 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:31:58.240 [148/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:31:58.240 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:31:58.240 [150/265] Linking static target lib/librte_timer.a
00:31:58.499 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:31:58.499 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:31:58.499 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:31:58.499 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:31:58.499 [155/265] Linking static target lib/librte_compressdev.a
00:31:58.757 [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:31:58.757 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:31:59.015 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:31:59.015 [159/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:31:59.015 [160/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:31:59.274 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:31:59.274 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:31:59.274 [163/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:31:59.274 [164/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:32:00.209 [165/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:32:00.209 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:32:00.209 [167/265] Linking static target lib/librte_dmadev.a
00:32:00.209 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:32:00.209 [169/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:32:00.209 [170/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:32:00.777 [171/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:32:01.036 [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:32:01.036 [173/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:32:01.294 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:32:01.553 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:32:01.553 [176/265] Linking static target lib/librte_power.a
00:32:01.553 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:32:01.811 [178/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:32:01.811 [179/265] Linking static target lib/librte_reorder.a
00:32:01.811 [180/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:32:01.811 [181/265] Linking static target lib/librte_ethdev.a
00:32:02.378 [182/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:32:02.378 [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:32:02.378 [184/265] Linking static target lib/librte_security.a
00:32:02.636 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:32:02.895 [186/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:32:03.154 [187/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:32:03.154 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:32:03.154 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:32:03.411 [190/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:32:03.411 [191/265] Linking static target lib/librte_hash.a
00:32:03.411 [192/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:32:03.411 [193/265] Linking static target lib/librte_cryptodev.a
00:32:03.669 [194/265] Linking target lib/librte_eal.so.24.0
00:32:03.928 [195/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:32:03.928 [196/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:32:04.186 [197/265] Linking target lib/librte_meter.so.24.0
00:32:04.186 [198/265] Linking target lib/librte_ring.so.24.0
00:32:04.186 [199/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:32:04.186 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:32:04.186 [201/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:32:04.445 [202/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:32:04.445 [203/265] Linking target lib/librte_pci.so.24.0
00:32:04.445 [204/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:32:04.704 [205/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:32:04.704 [206/265] Linking target lib/librte_timer.so.24.0
00:32:04.963 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:32:04.963 [208/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:32:04.963 [209/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:32:05.222 [210/265] Linking target lib/librte_rcu.so.24.0
00:32:05.480 [211/265] Linking target lib/librte_dmadev.so.24.0
00:32:05.480 [212/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:32:05.480 [213/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:32:05.739 [214/265] Linking target lib/librte_mempool.so.24.0
00:32:05.739 [215/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:32:05.739 [216/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:32:05.739 [217/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:32:05.998 [218/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:32:05.998 [219/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:32:05.998 [220/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:32:06.564 [221/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:32:06.823 [222/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:32:06.823 [223/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:32:06.823 [224/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:32:06.823 [225/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:32:07.082 [226/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:32:07.082 [227/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:32:07.082 [228/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:32:07.082 [229/265] Linking static target drivers/librte_bus_vdev.a
00:32:07.082 [230/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:32:07.082 [231/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:32:07.082 [232/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:32:07.082 [233/265] Linking static target drivers/librte_bus_pci.a
00:32:07.341 [234/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:32:07.342 [235/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:32:07.342 [236/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:32:07.342 [237/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:32:07.601 [238/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:32:07.601 [239/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:32:07.601 [240/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:32:07.601 [241/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:32:07.601 [242/265] Linking static target drivers/librte_mempool_ring.a
00:32:07.861 [243/265] Linking target drivers/librte_bus_vdev.so.24.0
00:32:08.120 [244/265] Linking target lib/librte_mbuf.so.24.0
00:32:08.379 [245/265] Linking target drivers/librte_mempool_ring.so.24.0
00:32:08.379 [246/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:32:08.946 [247/265] Linking target lib/librte_reorder.so.24.0
00:32:08.947 [248/265] Linking target lib/librte_compressdev.so.24.0
00:32:09.515 [249/265] Linking target drivers/librte_bus_pci.so.24.0
00:32:09.515 [250/265] Linking target lib/librte_net.so.24.0
00:32:09.774 [251/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:32:11.152 [252/265] Linking target lib/librte_cmdline.so.24.0
00:32:11.411 [253/265] Linking target lib/librte_cryptodev.so.24.0
00:32:11.411 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:32:11.978 [255/265] Linking target lib/librte_security.so.24.0
00:32:13.884 [256/265] Linking target lib/librte_ethdev.so.24.0
00:32:14.143 [257/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:32:15.520 [258/265] Linking target lib/librte_hash.so.24.0
00:32:15.521 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:32:16.463 [260/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:32:16.721 [261/265] Linking target lib/librte_power.so.24.0
00:32:43.269 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:32:43.269 [263/265] Linking static target lib/librte_vhost.a
00:32:43.269 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:32:55.525 [265/265] Linking target lib/librte_vhost.so.24.0
00:32:55.525 INFO: autodetecting backend as ninja
00:32:55.525 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:32:55.525 CC lib/log/log_flags.o
00:32:55.525 CC lib/log/log.o
00:32:55.525 CC lib/log/log_deprecated.o
00:32:55.525 CC lib/ut/ut.o
00:32:55.525 CC lib/ut_mock/mock.o
00:32:55.525 LIB libspdk_ut_mock.a
00:32:55.525 LIB libspdk_ut.a
00:32:55.525 LIB libspdk_log.a
00:32:55.525 CC lib/util/base64.o
00:32:55.525 CXX lib/trace_parser/trace.o
00:32:55.525 CC lib/ioat/ioat.o
00:32:55.525 CC lib/util/bit_array.o
00:32:55.525 CC lib/util/cpuset.o
00:32:55.525 CC lib/util/crc16.o
00:32:55.525 CC lib/util/crc32c.o
00:32:55.525 CC lib/util/crc32.o
00:32:55.525 CC lib/dma/dma.o
00:32:55.783 CC lib/vfio_user/host/vfio_user_pci.o
00:32:55.783 CC lib/util/crc32_ieee.o
00:32:55.783 CC lib/vfio_user/host/vfio_user.o
00:32:55.783 CC lib/util/crc64.o
00:32:55.783 CC lib/util/dif.o
00:32:55.783 LIB libspdk_dma.a
00:32:55.783 CC lib/util/fd.o
00:32:55.783 CC lib/util/file.o
00:32:55.783 CC lib/util/hexlify.o
00:32:55.783 LIB libspdk_ioat.a
00:32:55.783 CC lib/util/iov.o
00:32:55.783 CC lib/util/math.o
00:32:55.783 CC lib/util/pipe.o
00:32:56.042 CC lib/util/strerror_tls.o
00:32:56.042 CC lib/util/string.o
00:32:56.042 LIB libspdk_vfio_user.a
00:32:56.042 CC lib/util/uuid.o
00:32:56.042 CC lib/util/fd_group.o
00:32:56.042 CC lib/util/xor.o
00:32:56.042 CC lib/util/zipf.o
00:32:56.301 LIB libspdk_util.a
00:32:56.301 CC lib/json/json_parse.o
00:32:56.301 CC lib/json/json_util.o
00:32:56.301 CC lib/json/json_write.o
00:32:56.301 CC lib/env_dpdk/env.o
00:32:56.301 CC lib/env_dpdk/memory.o
00:32:56.301 CC lib/vmd/vmd.o
00:32:56.301 CC lib/conf/conf.o
00:32:56.301 CC lib/idxd/idxd.o
00:32:56.301 CC lib/rdma/common.o
00:32:56.301 LIB libspdk_trace_parser.a
00:32:56.301 CC lib/rdma/rdma_verbs.o
00:32:56.559 CC lib/env_dpdk/pci.o
00:32:56.559 LIB libspdk_conf.a
00:32:56.559 CC lib/idxd/idxd_user.o
00:32:56.559 CC lib/idxd/idxd_kernel.o
00:32:56.559 LIB libspdk_json.a
00:32:56.559 CC lib/env_dpdk/init.o
00:32:56.559 CC lib/vmd/led.o
00:32:56.559 LIB libspdk_rdma.a
00:32:56.559 CC lib/env_dpdk/threads.o
00:32:56.818 CC lib/env_dpdk/pci_ioat.o
00:32:56.818 CC lib/env_dpdk/pci_virtio.o
00:32:56.818 LIB libspdk_idxd.a
00:32:56.818 CC lib/env_dpdk/pci_vmd.o
00:32:56.818 LIB libspdk_vmd.a
00:32:56.818 CC lib/jsonrpc/jsonrpc_server.o
00:32:56.818 CC lib/env_dpdk/pci_idxd.o
00:32:56.818 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:32:56.818 CC lib/env_dpdk/pci_event.o
00:32:56.818 CC lib/env_dpdk/sigbus_handler.o
00:32:56.818 CC lib/env_dpdk/pci_dpdk.o
00:32:56.818 CC lib/env_dpdk/pci_dpdk_2207.o
00:32:56.818 CC lib/env_dpdk/pci_dpdk_2211.o
00:32:56.818 CC lib/jsonrpc/jsonrpc_client.o
00:32:57.077 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:32:57.077 LIB libspdk_jsonrpc.a
00:32:57.337 CC lib/rpc/rpc.o
00:32:57.337 LIB libspdk_rpc.a
00:32:57.597 CC lib/trace/trace.o
00:32:57.597 CC lib/trace/trace_rpc.o
00:32:57.597 CC lib/trace/trace_flags.o
00:32:57.597 CC lib/notify/notify.o
00:32:57.597 CC lib/notify/notify_rpc.o
00:32:57.597 CC lib/sock/sock_rpc.o
00:32:57.597 CC lib/sock/sock.o
00:32:57.597 LIB libspdk_notify.a
00:32:57.856 LIB libspdk_trace.a
00:32:57.856 LIB libspdk_env_dpdk.a
00:32:57.856 LIB libspdk_sock.a
00:32:57.856 CC lib/thread/thread.o
00:32:58.115 CC lib/thread/iobuf.o
00:32:58.115 CC lib/nvme/nvme_ctrlr_cmd.o
00:32:58.115 CC lib/nvme/nvme_ctrlr.o
00:32:58.115 CC lib/nvme/nvme_fabric.o
00:32:58.115 CC lib/nvme/nvme_ns.o
00:32:58.115 CC lib/nvme/nvme_ns_cmd.o
00:32:58.115 CC lib/nvme/nvme_qpair.o
00:32:58.115 CC lib/nvme/nvme_pcie.o
00:32:58.115 CC lib/nvme/nvme_pcie_common.o
00:32:58.115 CC lib/nvme/nvme.o
00:32:58.683 LIB libspdk_thread.a
00:32:58.683 CC lib/nvme/nvme_quirks.o
00:32:58.683 CC lib/nvme/nvme_transport.o
00:32:58.942 CC lib/nvme/nvme_discovery.o
00:32:58.942 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:32:58.942 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:32:58.942 CC lib/nvme/nvme_tcp.o
00:32:58.942 CC lib/nvme/nvme_opal.o
00:32:58.942 CC lib/nvme/nvme_io_msg.o
00:32:59.201 CC lib/nvme/nvme_poll_group.o
00:32:59.460 CC lib/nvme/nvme_zns.o
00:32:59.460 CC lib/nvme/nvme_cuse.o
00:32:59.460 CC lib/nvme/nvme_vfio_user.o
00:32:59.718 CC lib/nvme/nvme_rdma.o
00:32:59.718 CC lib/accel/accel.o
00:32:59.718 CC lib/blob/blobstore.o
00:32:59.718 CC lib/init/json_config.o
00:32:59.718 CC lib/init/subsystem.o
00:32:59.977 CC lib/init/subsystem_rpc.o
00:32:59.977 CC lib/accel/accel_rpc.o
00:32:59.977 CC lib/init/rpc.o
00:32:59.977 CC lib/accel/accel_sw.o
00:33:00.236 CC lib/blob/request.o
00:33:00.236 LIB libspdk_init.a
00:33:00.236 CC lib/blob/zeroes.o
00:33:00.236 CC lib/blob/blob_bs_dev.o
00:33:00.236 CC lib/virtio/virtio.o
00:33:00.236 CC lib/virtio/virtio_vhost_user.o
00:33:00.236 CC lib/virtio/virtio_vfio_user.o
00:33:00.236 LIB libspdk_accel.a
00:33:00.236 CC lib/virtio/virtio_pci.o
00:33:00.495 CC lib/event/app.o
00:33:00.495 CC lib/bdev/bdev.o
00:33:00.495 CC lib/bdev/bdev_rpc.o
00:33:00.495 CC lib/event/reactor.o
00:33:00.495 CC lib/event/log_rpc.o
00:33:00.495 CC lib/event/app_rpc.o
00:33:00.495 CC lib/event/scheduler_static.o
00:33:00.495 LIB libspdk_virtio.a
00:33:00.496 CC lib/bdev/bdev_zone.o
00:33:00.755 CC lib/bdev/part.o
00:33:00.755 CC lib/bdev/scsi_nvme.o
00:33:00.755 LIB libspdk_event.a
00:33:00.755 LIB libspdk_nvme.a
00:33:01.324 LIB libspdk_blob.a
00:33:01.583 CC lib/lvol/lvol.o
00:33:01.583 CC lib/blobfs/blobfs.o
00:33:01.583 CC lib/blobfs/tree.o
00:33:01.842 LIB libspdk_bdev.a
00:33:01.842 LIB libspdk_blobfs.a
00:33:02.102 CC lib/ublk/ublk.o
00:33:02.102 CC lib/nbd/nbd.o
00:33:02.102 CC lib/ublk/ublk_rpc.o
00:33:02.102 CC lib/nbd/nbd_rpc.o
00:33:02.102 CC lib/scsi/dev.o
00:33:02.102 CC lib/scsi/lun.o
00:33:02.102 CC lib/scsi/port.o
00:33:02.102 CC lib/ftl/ftl_core.o
00:33:02.102 CC lib/nvmf/ctrlr.o
00:33:02.102 LIB libspdk_lvol.a
00:33:02.102 CC lib/nvmf/ctrlr_discovery.o
00:33:02.102 CC lib/nvmf/ctrlr_bdev.o
00:33:02.102 CC lib/nvmf/subsystem.o
00:33:02.102 CC lib/nvmf/nvmf.o
00:33:02.361 CC lib/nvmf/nvmf_rpc.o
00:33:02.361 CC lib/scsi/scsi.o
00:33:02.361 CC lib/nvmf/transport.o
00:33:02.361 CC lib/ftl/ftl_init.o
00:33:02.361 LIB libspdk_nbd.a
00:33:02.361 CC lib/ftl/ftl_layout.o
00:33:02.361 CC lib/scsi/scsi_bdev.o
00:33:02.361 LIB libspdk_ublk.a
00:33:02.361 CC lib/scsi/scsi_pr.o
00:33:02.620 CC lib/scsi/scsi_rpc.o
00:33:02.620 CC lib/scsi/task.o
00:33:02.620 CC lib/nvmf/tcp.o
00:33:02.620 CC lib/nvmf/rdma.o
00:33:02.620 CC lib/ftl/ftl_debug.o
00:33:02.620 CC lib/ftl/ftl_io.o
00:33:02.620 CC lib/ftl/ftl_sb.o
00:33:02.620 CC lib/ftl/ftl_l2p.o
00:33:02.620 CC lib/ftl/ftl_l2p_flat.o
00:33:02.879 LIB libspdk_scsi.a
00:33:02.879 CC lib/ftl/ftl_nv_cache.o
00:33:02.879 CC lib/ftl/ftl_band.o
00:33:02.879 CC lib/ftl/ftl_band_ops.o
00:33:02.879 CC lib/ftl/ftl_writer.o
00:33:02.879 CC lib/ftl/ftl_rq.o
00:33:02.879 CC lib/iscsi/conn.o
00:33:02.879 CC lib/ftl/ftl_reloc.o
00:33:02.879 CC lib/ftl/ftl_l2p_cache.o
00:33:03.138 CC lib/ftl/ftl_p2l.o
00:33:03.138 CC lib/ftl/mngt/ftl_mngt.o
00:33:03.138 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:33:03.138 CC lib/vhost/vhost.o
00:33:03.138 CC lib/vhost/vhost_rpc.o
00:33:03.402 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:33:03.402 CC lib/iscsi/init_grp.o
00:33:03.403 CC lib/iscsi/iscsi.o
00:33:03.403 CC lib/iscsi/md5.o
00:33:03.403 CC lib/iscsi/param.o
00:33:03.403 CC lib/ftl/mngt/ftl_mngt_startup.o
00:33:03.403 CC lib/ftl/mngt/ftl_mngt_md.o
00:33:03.403 CC lib/iscsi/portal_grp.o
00:33:03.662 CC lib/iscsi/tgt_node.o
00:33:03.662 CC lib/ftl/mngt/ftl_mngt_misc.o
00:33:03.662 CC lib/vhost/vhost_scsi.o
00:33:03.662 CC lib/iscsi/iscsi_subsystem.o
00:33:03.662 LIB libspdk_nvmf.a
00:33:03.662 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:33:03.662 CC lib/iscsi/iscsi_rpc.o
00:33:03.662 CC lib/iscsi/task.o
00:33:03.921 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:33:03.921 CC lib/ftl/mngt/ftl_mngt_band.o
00:33:03.921 CC lib/vhost/vhost_blk.o
00:33:03.921 CC lib/vhost/rte_vhost_user.o
00:33:03.921 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:33:03.921 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:33:03.921 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:33:03.921 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:33:04.181 CC lib/ftl/utils/ftl_conf.o
00:33:04.181 CC lib/ftl/utils/ftl_md.o
00:33:04.181 CC lib/ftl/utils/ftl_mempool.o
00:33:04.181 CC lib/ftl/utils/ftl_bitmap.o
00:33:04.181 LIB libspdk_iscsi.a
00:33:04.181 CC lib/ftl/utils/ftl_property.o
00:33:04.181 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:33:04.181 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:33:04.439 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:33:04.439 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:33:04.439 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:33:04.439 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:33:04.439 CC lib/ftl/upgrade/ftl_sb_v3.o
00:33:04.439 CC lib/ftl/upgrade/ftl_sb_v5.o
00:33:04.439 CC lib/ftl/nvc/ftl_nvc_dev.o
00:33:04.439 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:33:04.439 CC lib/ftl/base/ftl_base_dev.o
00:33:04.439 CC lib/ftl/base/ftl_base_bdev.o
00:33:04.698 LIB libspdk_ftl.a
00:33:04.957 LIB libspdk_vhost.a
00:33:05.217 CC module/env_dpdk/env_dpdk_rpc.o
00:33:05.217 CC module/accel/ioat/accel_ioat.o
00:33:05.217 CC module/scheduler/gscheduler/gscheduler.o
00:33:05.217 CC module/accel/dsa/accel_dsa.o
00:33:05.217 CC module/blob/bdev/blob_bdev.o
00:33:05.217 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:33:05.217 CC module/accel/error/accel_error.o
00:33:05.217 CC module/scheduler/dynamic/scheduler_dynamic.o
00:33:05.217 CC module/accel/iaa/accel_iaa.o
00:33:05.217 CC module/sock/posix/posix.o
00:33:05.217 LIB libspdk_env_dpdk_rpc.a
00:33:05.217 CC module/accel/dsa/accel_dsa_rpc.o
00:33:05.476 LIB libspdk_scheduler_gscheduler.a
00:33:05.476 LIB libspdk_scheduler_dpdk_governor.a
00:33:05.476 CC module/accel/ioat/accel_ioat_rpc.o
00:33:05.476 CC module/accel/iaa/accel_iaa_rpc.o
00:33:05.476 CC module/accel/error/accel_error_rpc.o
00:33:05.476 LIB libspdk_scheduler_dynamic.a
00:33:05.476 LIB libspdk_blob_bdev.a
00:33:05.476 LIB libspdk_accel_dsa.a
00:33:05.476 LIB libspdk_accel_ioat.a
00:33:05.476 LIB libspdk_accel_iaa.a
00:33:05.476 LIB libspdk_accel_error.a
00:33:05.476 CC module/bdev/lvol/vbdev_lvol.o
00:33:05.476 CC module/bdev/malloc/bdev_malloc.o
00:33:05.476 CC module/bdev/gpt/gpt.o
00:33:05.476 CC module/bdev/delay/vbdev_delay.o
00:33:05.476 CC module/bdev/error/vbdev_error.o
00:33:05.476 CC module/blobfs/bdev/blobfs_bdev.o
00:33:05.735 CC module/bdev/null/bdev_null.o
00:33:05.735 CC module/bdev/passthru/vbdev_passthru.o
00:33:05.735 CC module/bdev/nvme/bdev_nvme.o
00:33:05.735 LIB libspdk_sock_posix.a
00:33:05.735 CC module/bdev/gpt/vbdev_gpt.o
00:33:05.735 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:33:05.735 CC module/bdev/null/bdev_null_rpc.o
00:33:05.735 CC module/bdev/error/vbdev_error_rpc.o
00:33:05.735 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:33:05.993 CC module/bdev/malloc/bdev_malloc_rpc.o
00:33:05.993 CC module/bdev/nvme/bdev_nvme_rpc.o
00:33:05.993 CC module/bdev/delay/vbdev_delay_rpc.o
00:33:05.993 LIB libspdk_blobfs_bdev.a
00:33:05.993 LIB libspdk_bdev_null.a
00:33:05.993 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:33:05.993 CC module/bdev/nvme/nvme_rpc.o
00:33:05.993 LIB libspdk_bdev_error.a
00:33:05.993 LIB libspdk_bdev_gpt.a
00:33:05.993 LIB libspdk_bdev_passthru.a
00:33:05.993 LIB libspdk_bdev_malloc.a
00:33:05.994 CC module/bdev/raid/bdev_raid.o
00:33:05.994 LIB libspdk_bdev_delay.a
00:33:05.994 CC module/bdev/split/vbdev_split.o
00:33:05.994 CC module/bdev/zone_block/vbdev_zone_block.o
00:33:06.251 CC module/bdev/aio/bdev_aio.o
00:33:06.251 CC module/bdev/ftl/bdev_ftl.o
00:33:06.251 CC module/bdev/iscsi/bdev_iscsi.o
00:33:06.251 LIB libspdk_bdev_lvol.a
00:33:06.251 CC module/bdev/virtio/bdev_virtio_scsi.o
00:33:06.251 CC module/bdev/virtio/bdev_virtio_blk.o
00:33:06.251 CC module/bdev/split/vbdev_split_rpc.o
00:33:06.251 CC module/bdev/virtio/bdev_virtio_rpc.o
00:33:06.510 CC module/bdev/ftl/bdev_ftl_rpc.o
00:33:06.510 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:33:06.510 CC module/bdev/aio/bdev_aio_rpc.o
00:33:06.510 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:33:06.510 LIB libspdk_bdev_split.a
00:33:06.510 CC module/bdev/raid/bdev_raid_rpc.o
00:33:06.510 CC module/bdev/nvme/bdev_mdns_client.o
00:33:06.510 CC module/bdev/raid/bdev_raid_sb.o
00:33:06.510 LIB libspdk_bdev_zone_block.a
00:33:06.510 LIB libspdk_bdev_aio.a
00:33:06.510 CC module/bdev/raid/raid0.o
00:33:06.510 CC module/bdev/raid/raid1.o
00:33:06.510 LIB libspdk_bdev_ftl.a
00:33:06.510 CC module/bdev/raid/concat.o
00:33:06.510 LIB libspdk_bdev_iscsi.a
00:33:06.510 CC module/bdev/nvme/vbdev_opal.o
00:33:06.769 CC module/bdev/nvme/vbdev_opal_rpc.o
00:33:06.769 LIB libspdk_bdev_virtio.a
00:33:06.769 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:33:06.769 CC module/bdev/raid/raid5f.o
00:33:07.028 LIB libspdk_bdev_nvme.a
00:33:07.028 LIB libspdk_bdev_raid.a
00:33:07.287 CC module/event/subsystems/scheduler/scheduler.o
00:33:07.287 CC module/event/subsystems/sock/sock.o
00:33:07.287 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:33:07.287 CC module/event/subsystems/vmd/vmd.o
00:33:07.287 CC module/event/subsystems/vmd/vmd_rpc.o
00:33:07.287 CC module/event/subsystems/iobuf/iobuf.o
00:33:07.287 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:33:07.546 LIB libspdk_event_sock.a
00:33:07.546 LIB libspdk_event_vhost_blk.a
00:33:07.546 LIB libspdk_event_vmd.a
00:33:07.546 LIB libspdk_event_scheduler.a
00:33:07.546 LIB libspdk_event_iobuf.a
00:33:07.546 CC module/event/subsystems/accel/accel.o
00:33:07.805 LIB libspdk_event_accel.a
00:33:07.805 CC module/event/subsystems/bdev/bdev.o
00:33:08.064 LIB libspdk_event_bdev.a
00:33:08.064 CC module/event/subsystems/scsi/scsi.o
00:33:08.064 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:33:08.064 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:33:08.064 CC module/event/subsystems/ublk/ublk.o
00:33:08.064 CC module/event/subsystems/nbd/nbd.o
00:33:08.324 LIB libspdk_event_ublk.a
00:33:08.324 LIB libspdk_event_scsi.a
00:33:08.324 LIB libspdk_event_nbd.a
00:33:08.324 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:33:08.324 LIB libspdk_event_nvmf.a
00:33:08.324 CC module/event/subsystems/iscsi/iscsi.o
00:33:08.583 LIB libspdk_event_iscsi.a
00:33:08.583 LIB libspdk_event_vhost_scsi.a
00:33:08.583 CC app/trace_record/trace_record.o
00:33:08.583 CXX app/trace/trace.o
00:33:08.583 CC examples/ioat/perf/perf.o
00:33:08.583 CC examples/accel/perf/accel_perf.o
00:33:08.583 CC examples/sock/hello_world/hello_sock.o
00:33:08.583 CC examples/nvme/hello_world/hello_world.o
00:33:08.583 CC app/nvmf_tgt/nvmf_main.o
00:33:08.843 CC examples/bdev/hello_world/hello_bdev.o
00:33:08.843 CC examples/blob/hello_world/hello_blob.o
00:33:08.843 CC test/accel/dif/dif.o
00:33:08.843 LINK spdk_trace_record
00:33:08.843 LINK nvmf_tgt
00:33:08.843 LINK ioat_perf
00:33:08.843 LINK hello_sock
00:33:08.843 LINK hello_world
00:33:09.102 LINK hello_blob
00:33:09.102 LINK hello_bdev
00:33:09.102 LINK dif
00:33:09.102 LINK accel_perf
00:33:09.102 LINK spdk_trace
00:33:24.011 CC app/iscsi_tgt/iscsi_tgt.o
00:33:24.270 LINK iscsi_tgt
00:33:32.410 CC examples/blob/cli/blobcli.o
00:33:34.942 LINK blobcli
00:33:35.509 CC examples/ioat/verify/verify.o
00:33:36.904 LINK verify
00:33:54.990 CC examples/vmd/lsvmd/lsvmd.o
00:33:54.990 LINK lsvmd
00:34:03.105 CC examples/nvme/reconnect/reconnect.o
00:34:05.670 LINK reconnect
00:34:07.571 CC examples/nvme/nvme_manage/nvme_manage.o
00:34:10.858 LINK nvme_manage
00:34:16.128 CC examples/nvmf/nvmf/nvmf.o
00:34:18.659 LINK nvmf
00:35:14.888 CC examples/vmd/led/led.o
00:35:14.888 LINK led
00:35:27.136 CC examples/util/zipf/zipf.o
00:35:28.513 LINK zipf
00:35:36.630 CC examples/nvme/arbitration/arbitration.o
00:35:38.532 LINK arbitration
00:36:10.622 CC examples/bdev/bdevperf/bdevperf.o
00:36:10.622 CC examples/thread/thread/thread_ex.o
00:36:10.622 LINK thread
00:36:10.622 CC test/app/bdev_svc/bdev_svc.o
00:36:10.882 LINK bdev_svc
00:36:10.882 LINK bdevperf
00:36:14.169 CC test/bdev/bdevio/bdevio.o
00:36:15.545 LINK bdevio
00:36:20.816 CC examples/idxd/perf/perf.o
00:36:21.751 LINK idxd_perf
00:36:29.867 CC examples/interrupt_tgt/interrupt_tgt.o
00:36:30.803 LINK interrupt_tgt
00:36:38.920 CC app/spdk_tgt/spdk_tgt.o
00:36:39.487 LINK spdk_tgt
00:36:41.450 CC examples/nvme/hotplug/hotplug.o
00:36:43.355 LINK hotplug
00:36:45.257 CC app/spdk_lspci/spdk_lspci.o
00:36:45.827 LINK spdk_lspci
00:37:00.713 CC test/blobfs/mkfs/mkfs.o
00:37:01.281 LINK mkfs
00:37:57.511 CC examples/nvme/cmb_copy/cmb_copy.o
00:37:57.511 LINK cmb_copy
00:38:02.782 CC examples/nvme/abort/abort.o
00:38:04.159 CC app/spdk_nvme_perf/perf.o
00:38:05.096 LINK abort
00:38:10.371 LINK spdk_nvme_perf
00:38:42.453 CC app/spdk_nvme_identify/identify.o
00:38:46.695 LINK spdk_nvme_identify
00:39:13.249 CC app/spdk_nvme_discover/discovery_aer.o
00:39:13.249 LINK spdk_nvme_discover
00:39:13.249 CC app/spdk_top/spdk_top.o
00:39:15.784 LINK spdk_top
00:39:16.721 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:39:18.629 LINK nvme_fuzz
00:39:18.629 CC app/vhost/vhost.o
00:39:19.568 TEST_HEADER include/spdk/config.h
00:39:19.568 CXX test/cpp_headers/accel.o
00:39:19.568 LINK vhost
00:39:20.507 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:39:20.766 CXX test/cpp_headers/accel_module.o
00:39:21.704 LINK pmr_persistence
00:39:21.963 CXX test/cpp_headers/assert.o
00:39:22.900 CXX test/cpp_headers/barrier.o
00:39:24.278 CC test/app/histogram_perf/histogram_perf.o
00:39:24.278 CXX test/cpp_headers/base64.o
00:39:25.214 LINK histogram_perf
00:39:25.472 CXX test/cpp_headers/bdev.o
00:39:26.850 CXX test/cpp_headers/bdev_module.o
00:39:28.228 CXX test/cpp_headers/bdev_zone.o
00:39:30.134 CXX test/cpp_headers/bit_array.o
00:39:32.052 CXX test/cpp_headers/bit_pool.o
00:39:33.430 CXX test/cpp_headers/blob.o
00:39:34.809 CXX test/cpp_headers/blob_bdev.o
00:39:36.716 CXX test/cpp_headers/blobfs.o
00:39:38.622 CXX test/cpp_headers/blobfs_bdev.o
00:39:40.527 CXX test/cpp_headers/conf.o
00:39:41.572 CXX test/cpp_headers/config.o
00:39:41.831 CXX test/cpp_headers/cpuset.o
00:39:43.210 CXX test/cpp_headers/crc16.o
00:39:45.115 CXX test/cpp_headers/crc32.o
00:39:45.374 CC app/spdk_dd/spdk_dd.o
00:39:46.307 CXX test/cpp_headers/crc64.o
00:39:47.682 LINK spdk_dd
00:39:47.941 CXX test/cpp_headers/dif.o
00:39:49.318 CXX test/cpp_headers/dma.o
00:39:51.224 CXX test/cpp_headers/endian.o
00:39:52.602 CXX test/cpp_headers/env.o
00:39:53.170 CXX test/cpp_headers/env_dpdk.o
00:39:54.107 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:39:54.675 CXX test/cpp_headers/event.o
00:39:56.052 CXX test/cpp_headers/fd.o
00:39:57.429 CXX test/cpp_headers/fd_group.o
00:39:58.808 CXX test/cpp_headers/file.o
00:40:00.185 CXX test/cpp_headers/ftl.o
00:40:01.122 LINK iscsi_fuzz
00:40:01.381 CXX test/cpp_headers/gpt_spec.o
00:40:01.949 CXX test/cpp_headers/hexlify.o
00:40:02.888 CC app/fio/nvme/fio_plugin.o
00:40:02.888 CXX test/cpp_headers/histogram_data.o
00:40:04.266 CXX test/cpp_headers/idxd.o
00:40:05.202 LINK spdk_nvme
00:40:05.202 CXX test/cpp_headers/idxd_spec.o
00:40:05.202 CXX test/cpp_headers/init.o
00:40:06.579 CXX test/cpp_headers/ioat.o
00:40:06.579 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:40:07.147 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:40:07.406 CXX test/cpp_headers/ioat_spec.o
00:40:07.665 CC test/dma/test_dma/test_dma.o
00:40:08.602 CXX test/cpp_headers/iscsi_spec.o
00:40:09.170 LINK vhost_fuzz
00:40:10.109 CXX test/cpp_headers/json.o
00:40:10.109 LINK test_dma
00:40:11.488 CXX test/cpp_headers/jsonrpc.o
00:40:12.866 CXX test/cpp_headers/likely.o
00:40:13.823 CXX test/cpp_headers/log.o
00:40:15.232 CXX test/cpp_headers/lvol.o
00:40:16.168 CXX test/cpp_headers/memory.o
00:40:17.546 CXX test/cpp_headers/mmio.o
00:40:18.924 CXX test/cpp_headers/nbd.o
00:40:19.182 CXX test/cpp_headers/notify.o
00:40:20.559 CXX test/cpp_headers/nvme.o
00:40:20.559 CC test/app/jsoncat/jsoncat.o
00:40:21.936 LINK jsoncat
00:40:21.936 CXX test/cpp_headers/nvme_intel.o
00:40:22.504 CXX test/cpp_headers/nvme_ocssd.o
00:40:23.882 CC app/fio/bdev/fio_plugin.o
00:40:24.142 CXX test/cpp_headers/nvme_ocssd_spec.o
00:40:25.520 CXX test/cpp_headers/nvme_spec.o
00:40:26.456 LINK spdk_bdev
00:40:27.025 CXX test/cpp_headers/nvme_zns.o
00:40:28.403 CXX test/cpp_headers/nvmf.o
00:40:30.308 CXX test/cpp_headers/nvmf_cmd.o
00:40:32.212 CXX test/cpp_headers/nvmf_fc_spec.o
00:40:34.117 CXX test/cpp_headers/nvmf_spec.o
00:40:35.494 CXX test/cpp_headers/nvmf_transport.o
00:40:35.753 CC test/env/mem_callbacks/mem_callbacks.o
00:40:37.132 CXX test/cpp_headers/opal.o
00:40:39.037 CXX test/cpp_headers/opal_spec.o
00:40:40.940 CXX test/cpp_headers/pci_ids.o
00:40:41.505 LINK mem_callbacks
00:40:42.073 CXX test/cpp_headers/pipe.o
00:40:43.976 CXX test/cpp_headers/queue.o
00:40:43.976 CXX test/cpp_headers/reduce.o
00:40:45.352 CXX test/cpp_headers/rpc.o
00:40:46.727 CXX test/cpp_headers/scheduler.o
00:40:48.114 CXX test/cpp_headers/scsi.o
00:40:50.084 CXX test/cpp_headers/scsi_spec.o
00:40:51.460 CXX test/cpp_headers/sock.o
00:40:53.364 CXX test/cpp_headers/stdinc.o
00:40:54.301 CXX test/cpp_headers/string.o
00:40:55.679 CXX test/cpp_headers/thread.o
00:40:57.054 CXX test/cpp_headers/trace.o
00:40:57.990 CC test/app/stub/stub.o
00:40:58.557 CXX test/cpp_headers/trace_parser.o
00:40:59.124 LINK stub
00:40:59.691 CXX test/cpp_headers/tree.o
00:40:59.950 CXX test/cpp_headers/ublk.o
00:41:01.324 CXX test/cpp_headers/util.o
00:41:01.891 CC test/event/event_perf/event_perf.o
00:41:02.151 CC test/lvol/esnap/esnap.o
00:41:02.410 CXX test/cpp_headers/uuid.o
00:41:02.669 LINK event_perf
00:41:03.607 CXX test/cpp_headers/version.o
00:41:03.867 CXX test/cpp_headers/vfio_user_pci.o
00:41:05.245 CXX test/cpp_headers/vfio_user_spec.o
00:41:06.624 CXX test/cpp_headers/vhost.o
00:41:08.001 CXX test/cpp_headers/vmd.o
00:41:09.381 CXX test/cpp_headers/xor.o
00:41:10.760 CXX test/cpp_headers/zipf.o
00:41:13.296 CC test/nvme/aer/aer.o
00:41:15.203 LINK aer
00:41:20.475 CC test/env/vtophys/vtophys.o
00:41:21.042 LINK vtophys
00:41:25.227 LINK esnap
00:42:04.008 CC test/event/reactor/reactor.o
00:42:04.008 LINK reactor
00:42:16.218 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:42:16.218 CC test/rpc_client/rpc_client_test.o
00:42:16.218 LINK env_dpdk_post_init
00:42:16.218 LINK rpc_client_test
00:42:20.410 CC test/env/memory/memory_ut.o
00:42:20.410 CC test/thread/poller_perf/poller_perf.o
00:42:20.979 LINK poller_perf
00:42:25.172 CC test/nvme/reset/reset.o
00:42:26.109 LINK reset
00:42:26.109 LINK memory_ut
00:42:31.382 CC test/thread/lock/spdk_lock.o
00:42:35.574 CC test/env/pci/pci_ut.o
00:42:36.959 LINK spdk_lock
00:42:37.527 LINK pci_ut
00:42:49.747 CC test/event/reactor_perf/reactor_perf.o
00:42:49.747 LINK reactor_perf
00:42:53.034 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:42:54.425 LINK histogram_ut
00:42:55.866 CC test/event/app_repeat/app_repeat.o
00:42:57.243 LINK app_repeat
00:42:59.148 CC test/unit/lib/accel/accel.c/accel_ut.o
00:43:04.422 CC test/event/scheduler/scheduler.o
00:43:04.991 CC test/nvme/sgl/sgl.o
00:43:05.251 LINK scheduler
00:43:06.188 LINK accel_ut
00:43:06.447 LINK sgl
00:43:09.735 CC test/nvme/e2edp/nvme_dp.o
00:43:10.673 LINK nvme_dp
00:43:20.652 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:43:21.590 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:43:23.496 CC test/nvme/overhead/overhead.o
00:43:24.063 LINK blob_bdev_ut
00:43:25.000 LINK overhead
00:43:29.192 CC test/nvme/err_injection/err_injection.o
00:43:29.761 LINK err_injection
00:43:33.950 CC test/unit/lib/blob/blob.c/blob_ut.o
00:43:40.525 LINK bdev_ut
00:43:47.158 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:43:48.094 LINK tree_ut
00:43:54.662 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:43:59.934 LINK blobfs_async_ut
00:44:01.312 CC test/unit/lib/dma/dma.c/dma_ut.o
00:44:02.692 LINK blob_ut
00:44:02.951 CC test/nvme/startup/startup.o
00:44:03.211 LINK dma_ut
00:44:04.148 LINK startup
00:44:08.339 CC test/unit/lib/bdev/part.c/part_ut.o
00:44:10.246 CC test/unit/lib/event/app.c/app_ut.o
00:44:12.782 LINK app_ut
00:44:18.056 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:44:18.056 LINK part_ut
00:44:18.625 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:44:19.563 LINK scsi_nvme_ut
00:44:21.470 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:44:21.470 LINK blobfs_sync_ut
00:44:21.730 CC test/nvme/reserve/reserve.o
00:44:22.667 LINK reserve
00:44:24.045 LINK reactor_ut
00:44:24.045 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:44:24.045 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:44:25.422 LINK gpt_ut
00:44:25.681 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:44:25.681 CC test/nvme/simple_copy/simple_copy.o
00:44:27.055 LINK simple_copy
00:44:27.625 LINK vbdev_lvol_ut
00:44:29.567 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:44:31.552 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:44:33.455 LINK bdev_raid_sb_ut
00:44:34.834 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:44:36.214 LINK bdev_raid_ut
00:44:36.214 LINK bdev_ut
00:44:36.473 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:44:36.732 LINK concat_ut
00:44:36.990 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:44:37.927 LINK bdev_zone_ut
00:44:38.495 LINK blobfs_bdev_ut
00:44:39.063 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:44:42.354 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:44:42.614 LINK vbdev_zone_block_ut
00:44:45.150 CC test/nvme/connect_stress/connect_stress.o
00:44:46.085 LINK connect_stress
00:44:47.462 CC test/nvme/boot_partition/boot_partition.o
00:44:47.462 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:44:48.401 LINK boot_partition
00:44:49.338 LINK raid1_ut
00:44:51.871 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:44:53.249 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:44:55.154 LINK ioat_ut
00:44:55.721 LINK bdev_nvme_ut
00:44:55.980 LINK raid5f_ut
00:44:57.357 CC test/nvme/compliance/nvme_compliance.o
00:44:58.295 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:44:59.229 LINK nvme_compliance
00:44:59.794 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:45:03.978 LINK conn_ut
00:45:08.226 LINK json_parse_ut
00:45:09.615 CC test/nvme/fused_ordering/fused_ordering.o
00:45:10.551 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:45:10.551 LINK fused_ordering
00:45:13.086 LINK init_grp_ut
00:45:15.619 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:45:15.619 CC test/unit/lib/iscsi/param.c/param_ut.o
00:45:17.520 LINK param_ut
00:45:18.088 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:45:18.347 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:45:20.250 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:45:20.509 LINK portal_grp_ut
00:45:21.076 LINK tgt_node_ut
00:45:21.644 LINK iscsi_ut
00:45:21.903 LINK json_util_ut
00:45:27.175 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:45:30.460 LINK json_write_ut
00:45:30.460 CC test/nvme/doorbell_aers/doorbell_aers.o
00:45:30.460 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:45:30.719 CC test/unit/lib/log/log.c/log_ut.o
00:45:30.719 LINK doorbell_aers
00:45:30.978 CC test/nvme/fdp/fdp.o
00:45:31.236 LINK jsonrpc_server_ut
00:45:31.804 LINK log_ut
00:45:32.063 LINK fdp
00:45:33.965 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:45:34.901 CC test/unit/lib/notify/notify.c/notify_ut.o
00:45:35.468 CC test/nvme/cuse/cuse.o
00:45:35.726 LINK notify_ut
00:45:35.984 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:45:38.515 LINK lvol_ut
00:45:39.083 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:45:39.650 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:45:39.650 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:45:40.218 LINK cuse
00:45:41.594 LINK nvme_ut
00:45:44.126 LINK nvme_ctrlr_cmd_ut
00:45:46.728 LINK tcp_ut
00:45:46.995 LINK nvme_ctrlr_ut
00:45:46.995 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:45:47.931 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:45:51.216 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:45:51.216 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:45:51.474 LINK nvme_ctrlr_ocssd_cmd_ut
00:45:51.733 CC test/unit/lib/sock/sock.c/sock_ut.o
00:45:52.301 LINK nvme_ns_ut
00:45:52.868 LINK dev_ut
00:45:55.402 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:45:55.661 LINK sock_ut
00:45:56.227 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:45:57.603 LINK nvme_ns_cmd_ut
00:45:57.603 LINK lun_ut
00:45:57.603 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:45:58.538 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:45:58.797 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:45:59.368 LINK nvme_ns_ocssd_cmd_ut
00:46:00.305 LINK nvme_poll_group_ut
00:46:00.305 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:46:00.564 LINK nvme_pcie_ut
00:46:00.564 CC test/unit/lib/sock/posix.c/posix_ut.o
00:46:00.823 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:46:01.391 LINK ctrlr_ut
00:46:01.391 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:46:01.391 LINK posix_ut
00:46:01.391 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:46:01.651 LINK subsystem_ut
00:46:01.651 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:46:01.651 LINK scsi_ut
00:46:03.028 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:46:03.287 LINK ctrlr_discovery_ut
00:46:03.546 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:46:04.115 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:46:04.115 LINK nvme_quirks_ut
00:46:04.373 LINK nvme_qpair_ut
00:46:04.633 LINK ctrlr_bdev_ut
00:46:04.633 CC test/unit/lib/thread/thread.c/thread_ut.o
00:46:05.201 LINK scsi_bdev_ut
00:46:05.771 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:46:06.337 LINK iobuf_ut
00:46:06.337 LINK thread_ut
00:46:06.595 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:46:07.163 LINK nvme_tcp_ut
00:46:07.423 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:46:07.423 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:46:07.682 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:46:08.250 LINK nvmf_ut
00:46:09.187 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:46:09.187 LINK nvme_transport_ut
00:46:09.446 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:46:09.705 LINK scsi_pr_ut
00:46:09.706 LINK nvme_io_msg_ut
00:46:09.965 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:46:10.533 CC test/unit/lib/util/base64.c/base64_ut.o
00:46:10.792 LINK base64_ut
00:46:10.792 LINK rdma_ut
00:46:11.051 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:46:11.310 LINK nvme_pcie_common_ut
00:46:11.310 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:46:11.569 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:46:11.828 LINK pci_event_ut
00:46:12.086 LINK transport_ut
00:46:12.086 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:46:12.086 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:46:12.086 LINK subsystem_ut
00:46:12.654 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:46:12.654 LINK rpc_ut
00:46:12.654 LINK bit_array_ut
00:46:12.913 LINK nvme_fabric_ut
00:46:13.480 LINK idxd_user_ut
00:46:13.738 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:46:15.643 CC test/unit/lib/rdma/common.c/common_ut.o
00:46:15.901 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:46:16.158 LINK common_ut
00:46:16.158 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:46:16.158 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:46:16.158 LINK cpuset_ut
00:46:16.158 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:46:16.417 LINK crc16_ut
00:46:16.417 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:46:16.417 LINK crc32_ieee_ut
00:46:16.728 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:46:16.728 LINK vhost_ut
00:46:16.728 LINK nvme_opal_ut
00:46:16.728 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:46:16.728 LINK crc32c_ut
00:46:16.987 LINK crc64_ut
00:46:16.987 LINK idxd_ut
00:46:16.987 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:46:16.987 CC test/unit/lib/util/dif.c/dif_ut.o
00:46:17.246 CC test/unit/lib/util/iov.c/iov_ut.o
00:46:17.246 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:46:17.505 CC test/unit/lib/util/math.c/math_ut.o
00:46:17.505 LINK ftl_l2p_ut
00:46:17.505 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:46:17.505 LINK iov_ut
00:46:17.764 LINK math_ut
00:46:18.023 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:46:18.281 LINK pipe_ut
00:46:18.281 LINK dif_ut
00:46:18.540 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:46:18.800 CC test/unit/lib/util/string.c/string_ut.o
00:46:19.059 CC test/unit/lib/util/xor.c/xor_ut.o
00:46:19.059 LINK string_ut
00:46:19.059 LINK xor_ut
00:46:19.318 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:46:19.318 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:46:19.577 LINK ftl_band_ut
00:46:19.577 LINK nvme_rdma_ut
00:46:19.577 LINK nvme_cuse_ut
00:46:19.577 LINK ftl_bitmap_ut
00:46:19.837 LINK ftl_io_ut
00:46:20.404 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:46:20.664 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:46:20.923 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:46:20.923 LINK ftl_mempool_ut
00:46:21.182 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:46:21.750 LINK ftl_mngt_ut
00:46:22.010 LINK ftl_layout_upgrade_ut
00:46:22.010 LINK ftl_sb_ut
00:47:43.459 05:27:53 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:47:43.459 make[1]: Nothing to be done for 'clean'.
00:47:43.459 05:27:57 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:47:43.459 05:27:57 -- common/autotest_common.sh@728 -- $ xtrace_disable
00:47:43.459 05:27:57 -- common/autotest_common.sh@10 -- $ set +x
00:47:43.459 05:27:57 -- spdk/autopackage.sh@48 -- $ timing_finish
00:47:43.459 05:27:57 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:47:43.459 05:27:57 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:47:43.459 05:27:57 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:47:43.459 + [[ -n 2378 ]]
00:47:43.459 + sudo kill 2378
00:47:43.469 [Pipeline] }
00:47:43.484 [Pipeline] // timeout
00:47:43.489 [Pipeline] }
00:47:43.503 [Pipeline] // stage
00:47:43.508 [Pipeline] }
00:47:43.522 [Pipeline] // catchError
00:47:43.531 [Pipeline] stage
00:47:43.533 [Pipeline] { (Stop VM)
00:47:43.545 [Pipeline] sh
00:47:43.827 + vagrant halt
00:47:47.115 ==> default: Halting domain...
00:47:52.399 [Pipeline] sh
00:47:52.681 + vagrant destroy -f
00:47:55.217 ==> default: Removing domain...
00:47:56.163 [Pipeline] sh
00:47:56.443 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output
00:47:56.452 [Pipeline] }
00:47:56.466 [Pipeline] // stage
00:47:56.471 [Pipeline] }
00:47:56.486 [Pipeline] // dir
00:47:56.491 [Pipeline] }
00:47:56.505 [Pipeline] // wrap
00:47:56.511 [Pipeline] }
00:47:56.523 [Pipeline] // catchError
00:47:56.533 [Pipeline] stage
00:47:56.535 [Pipeline] { (Epilogue)
00:47:56.548 [Pipeline] sh
00:47:56.830 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:48:11.821 [Pipeline] catchError
00:48:11.824 [Pipeline] {
00:48:11.837 [Pipeline] sh
00:48:12.121 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:48:12.121 Artifacts sizes are good
00:48:12.131 [Pipeline] }
00:48:12.144 [Pipeline] // catchError
00:48:12.154 [Pipeline] archiveArtifacts
00:48:12.161 Archiving artifacts
00:48:12.410 [Pipeline] cleanWs
00:48:12.427 [WS-CLEANUP] Deleting project workspace...
00:48:12.427 [WS-CLEANUP] Deferred wipeout is used...
00:48:12.433 [WS-CLEANUP] done
00:48:12.435 [Pipeline] }
00:48:12.450 [Pipeline] // stage
00:48:12.456 [Pipeline] }
00:48:12.470 [Pipeline] // node
00:48:12.476 [Pipeline] End of Pipeline
00:48:12.520 Finished: SUCCESS